Pattern matching Scala (key, Tuple2) values in reduceByKey() for Apache Spark


I have an RDD containing (stockName, stockValue) tuples. Many of the stocks are repeated and have differing values,

e.g. ("arm", 200.6), ("goog", 4000.4), ("arm", 3998.23), ("arm", 4002.45), etc.

The idea is to collect the stocks and calculate the average for each one.

In the code below, the map transforms each stock into (key, (value, 1)),

e.g. ("arm", (200.6, 1)).

The reduceByKey then aggregates the stocks with the same name, independently summing the values and the counts, which makes it easy to calculate the average for each stock (code not shown).

val partial = stocks
  .map { case (stock: String, value: Double) => (stock, (value, 1)) }
  .reduceByKey( (x, y) => (x._1 + y._1, x._2 + y._2) )
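For reference, the averaging step left out above could be a single mapValues over partial; a minimal sketch (the name averages is mine, not from the question):

val averages = partial.mapValues { case (total, count) => total / count }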

In the map I've been able to use pattern matching to express the transformation. I'd like to be able to do the same thing with the function argument passed to reduceByKey, in order to make it more readable.

So far I've not been able to improve on (x, y) => (x._1 + y._1, x._2 + y._2).

Any suggestions?

You can nest patterns to deconstruct (x, y) into ((x1, x2), (y1, y2)):

val partial = stocks
  .map { case (stock, value) => stock -> (value, 1) }
  .reduceByKey { case ((value1, count1), (value2, count2)) =>
    (value1 + value2, count1 + count2)
  }
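This works because a { case ... } block in a Function2 position is matched against the tuple of both arguments, so ((value1, count1), (value2, count2)) destructures the two accumulator pairs at once.

Putting both steps together with the averaging included, a self-contained sketch (the SparkContext setup and the sample data are illustrative, not from the question):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("StockAverages").setMaster("local[*]")
val sc   = new SparkContext(conf)

// Sample (stockName, stockValue) pairs, as in the question
val stocks = sc.parallelize(Seq(
  ("arm", 200.6), ("goog", 4000.4), ("arm", 3998.23), ("arm", 4002.45)
))

// Sum values and counts per stock using nested patterns
val partial = stocks
  .map { case (stock, value) => stock -> (value, 1) }
  .reduceByKey { case ((value1, count1), (value2, count2)) =>
    (value1 + value2, count1 + count2)
  }

// Divide the summed value by the count to get the average per stock
val averages = partial.mapValues { case (total, count) => total / count }

averages.collect().foreach { case (stock, avg) => println(f"$stock: $avg%.2f") }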
