This is part one of a learning series on PySpark, the Python API for Apache Spark, which itself is written in Scala.
The installation is pretty simple. These steps were done on OS X Mavericks, but they should work on Linux too. Here are the steps:
Download Spark: http://spark.apache.org/downloads.html
Download Scala: http://www.scala-lang.org/download/
Don't use the latest version of Scala; use Scala 2.10.x, since Spark is built against Scala 2.10.
export SCALA_HOME=your_path_to_scala
export SPARK_HOME=your_path_to_spark
brew install sbt
cd $SPARK_HOME
sbt/sbt assembly
$SPARK_HOME/bin/pyspark
And voila! You are now running PySpark on your machine.
To check that everything is properly installed, let's run a simple program:
test = sc.parallelize([1,2,3])
test.count()
This should return 3. For a slightly fuller check, you can run a few more RDD operations in the same shell, as sketched below.
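This is just a minimal sketch with made-up numbers, using standard RDD operations (map, filter, collect, reduce) available in the PySpark shell:

nums = sc.parallelize(range(1, 11))
squares = nums.map(lambda x: x * x)           # square each number
evens = squares.filter(lambda x: x % 2 == 0)  # keep only the even squares
evens.collect()                               # [4, 16, 36, 64, 100]
evens.reduce(lambda a, b: a + b)              # 220

With that working, start Hadoop on your machine and then launch PySpark again using: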
cd /usr/local/hadoop/
bin/start-all.sh
jps    # verify that the Hadoop daemons are running
$SPARK_HOME/bin/pyspark
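Once Hadoop is up, a good end-to-end check is to have Spark read a file out of HDFS. The path and port below are only placeholders (they assume a file you have already copied into HDFS and the default fs.default.name of hdfs://localhost:9000), so adjust them to match your setup:

lines = sc.textFile("hdfs://localhost:9000/user/yourname/sample.txt")  # hypothetical HDFS path
lines.count()                                                          # number of lines in the file
words = lines.flatMap(lambda line: line.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)   # classic word count
counts.take(5)                                                         # a few (word, count) pairs

If that returns results, PySpark is talking to both Spark and HDFS, and you are ready for the next part of the series.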