Stock Analysis with a Stock Dataset Using Spark

Below is a sample dataset for analyzing stocks.

Input Data: from this data we need to find the maximum selling price for each stock.

ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68
ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68
ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68
ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68
ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68
ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68
ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68
ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68
ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68
ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68
ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68
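Each record is comma-separated. Before going to Spark, it helps to see how a single line splits into fields; the sketch below assumes the layout used by the job (index 1 = stock symbol, index 6 = selling price) — the other column names are illustrative, not confirmed by the dataset.

```java
public class ParseStockLine {
    public static void main(String[] args) {
        // One record from the sample above; assumed layout:
        // exchange, symbol, date, open, high, low, selling price, volume, prev close
        String line = "ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68";
        String[] cols = line.split(",");

        String symbol = cols[1];                        // "B6J"
        double sellPrice = Double.parseDouble(cols[6]); // 4.55

        System.out.println(symbol + " -> " + sellPrice);
    }
}
```

The Spark job performs exactly this split inside its map step, once per line of the CSV.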

package com.dpq.stocks.driver;

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFlatMapFunction;

import scala.Tuple2;

public class StockAnalysis {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("RDD_EXAMPLES").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load the CSV into 4 partitions
        JavaRDD<String> fileRdd = sc.textFile("/Users/dpq/springbootWrokspace/StocksAnalysisBySpark/resources/Stocks.csv", 4);
        System.out.println(fileRdd.collect());

        // Extract (stock symbol, selling price): column 1 is the symbol, column 6 the price
        JavaRDD<Tuple2<String, Double>> stockWithPrice = fileRdd
                .map(data -> new Tuple2<>(data.split(",")[1], Double.parseDouble(data.split(",")[6])));

        // display all contents
        System.out.println("stockWithPrice:" + stockWithPrice.collect());

        // Convert to a JavaPairRDD so we can reduce by key
        JavaPairRDD<String, Double> stocksPairRDD = stockWithPrice
                .flatMapToPair(new PairFlatMapFunction<Tuple2<String, Double>, String, Double>() {

                    private static final long serialVersionUID = 1L;

                    @Override
                    public Iterator<Tuple2<String, Double>> call(Tuple2<String, Double> tuple) throws Exception {
                        // Create a fresh list on every call: a list held in a field
                        // would accumulate across calls and re-emit earlier tuples
                        List<Tuple2<String, Double>> tuplePairs = new LinkedList<>();
                        tuplePairs.add(tuple);
                        return tuplePairs.iterator();
                    }
                });

        System.out.println("stocksPairRDD:" + stocksPairRDD.collect());

        // Keep the maximum selling price per stock
        JavaPairRDD<String, Double> stocksAnalysisRDD = stocksPairRDD
                .reduceByKey(new Function2<Double, Double, Double>() {

                    @Override
                    public Double call(Double arg0, Double arg1) throws Exception {
                        return arg0 > arg1 ? arg0 : arg1;
                    }
                });

        System.out.println("OUTPUT:" + stocksAnalysisRDD.glom().collect());

        sc.close();
    }
}

The complete code is available on GitHub:

https://github.com/dpq1422/StockAnalysisBySpark
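The reduceByKey step — keep the larger of two prices for the same key — can be sanity-checked without a Spark cluster. Below is a plain-Java sketch of the same max-per-key aggregation using streams; the class name and sample pairs are illustrative, not part of the repository.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MaxPricePerStock {
    public static void main(String[] args) {
        // (symbol, selling price) pairs, as produced by the map step
        List<SimpleEntry<String, Double>> pairs = List.of(
                new SimpleEntry<>("B6J", 4.55),
                new SimpleEntry<>("B7J", 7.85),
                new SimpleEntry<>("S7J", 1.85),
                new SimpleEntry<>("S7J", 15.85));

        // Same idea as reduceByKey((a, b) -> a > b ? a : b):
        // the merge function Math::max resolves duplicate keys
        Map<String, Double> maxBySymbol = pairs.stream()
                .collect(Collectors.toMap(SimpleEntry::getKey,
                                          SimpleEntry::getValue,
                                          Math::max));

        System.out.println(maxBySymbol); // S7J maps to 15.85
    }
}
```

The merge function passed to `Collectors.toMap` plays the same role as the `Function2` passed to `reduceByKey`, just on a single JVM instead of across partitions.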

Output:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/10/01 01:26:44 INFO SparkContext: Running Spark version 3.1.2
21/10/01 01:26:45 INFO SparkContext: Submitted application: RDD_EXAMPLES
21/10/01 01:26:46 INFO FileInputFormat: Total input files to process : 1
21/10/01 01:26:46 INFO SparkContext: Starting job: collect at StockAnalysis.java:26
21/10/01 01:26:47 INFO DAGScheduler: Job 0 finished: collect at StockAnalysis.java:26, took 0.276834 s
[ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68, ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68, ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68, ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, 
ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68, ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,15.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68, ABCSE,B7J,2009-08-14,7.93,7.94,7.70,7.85,64600,7.68, ABCSE,B6J,2009-08-14,7.93,7.94,7.70,4.55,64600,7.68, ABCSE,B8J,2009-08-14,7.93,7.94,7.70,6.85,64600,7.68, ABCSE,B9J,2009-08-14,7.93,7.94,7.70,8.85,64600,7.68, ABCSE,A7J,2009-08-14,7.93,7.94,7.70,9.85,64600,7.68, ABCSE,S7J,2009-08-14,7.93,7.94,7.70,1.85,64600,7.68, ABCSE,D7J,2009-08-14,7.93,7.94,7.70,2.85,64600,7.68, ABCSE,F7J,2009-08-14,7.93,7.94,7.70,3.85,64600,7.68, ABCSE,G7J,2009-08-14,7.93,7.94,7.70,11.85,64600,7.68, ABCSE,H7J,2009-08-14,7.93,7.94,7.70,12.85,64600,7.68]
21/10/01 01:26:47 INFO SparkContext: Starting job: collect at StockAnalysis.java:31
21/10/01 01:26:47 INFO DAGScheduler: Job 1 finished: collect at StockAnalysis.java:31, took 0.066339 s
stockWithPrice:[(B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,15.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85)]
21/10/01 01:26:47 INFO SparkContext: Starting job: collect at StockAnalysis.java:46
21/10/01 01:26:47 INFO DAGScheduler: Job 2 finished: collect at StockAnalysis.java:46, took 0.063513 s
stocksPairRDD:[(B7J,7.85), (B7J,7.85), (B6J,4.55), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (D7J,2.85), (F7J,3.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), 
(B8J,6.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B6J,4.55), (B8J,6.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), 
(D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,15.85), (D7J,2.85), (D7J,2.85), (F7J,3.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85), (B7J,7.85), (B6J,4.55), (B8J,6.85), (B9J,8.85), (A7J,9.85), (S7J,1.85), (D7J,2.85), (F7J,3.85), (G7J,11.85), (H7J,12.85)]
21/10/01 01:26:47 INFO SparkContext: Starting job: collect at StockAnalysis.java:57
21/10/01 01:26:47 INFO DAGScheduler: Registering RDD 3 (flatMapToPair at StockAnalysis.java:33) as input to shuffle 0
21/10/01 01:26:47 INFO DAGScheduler: Submitting ShuffleMapStage 3 (MapPartitionsRDD[3] at flatMapToPair at StockAnalysis.java:33), which has no missing parents
21/10/01 01:26:47 INFO Executor: Running task 3.0 in stage 3.0 (TID 15)
21/10/01 01:26:47 INFO HadoopRDD: Input split: file:/Users/dpq/springbootWrokspace/StocksAnalysisBySpark/resources/Stocks.csv:2349+783
21/10/01 01:26:47 INFO Executor: Finished task 3.0 in stage 3.0 (TID 15). 1249 bytes result sent to driver
21/10/01 01:26:47 INFO TaskSetManager: Finished task 3.0 in stage 3.0 (TID 15) in 19 ms on 192.168.1.4 (executor driver) (4/4)
21/10/01 01:26:47 INFO TaskSchedulerImpl: Removed TaskSet 3.0, whose tasks have all completed, from pool
21/10/01 01:26:47 INFO DAGScheduler: ShuffleMapStage 3 (flatMapToPair at StockAnalysis.java:33) finished in 0.138 s
21/10/01 01:26:47 INFO DAGScheduler: looking for newly runnable stages
21/10/01 01:26:47 INFO DAGScheduler: running: Set()
21/10/01 01:26:47 INFO DAGScheduler: waiting: Set(ResultStage 4)
21/10/01 01:26:47 INFO DAGScheduler: failed: Set()
21/10/01 01:26:47 INFO DAGScheduler: Submitting ResultStage 4 (MapPartitionsRDD[6] at glom at StockAnalysis.java:57), which has no missing parents
21/10/01 01:26:47 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 5.3 KiB, free 1048.6 MiB)
21/10/01 01:26:47 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2.8 KiB, free 1048.6 MiB)
21/10/01 01:26:47 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on 192.168.1.4:49920 (size: 2.8 KiB, free: 1048.8 MiB)
21/10/01 01:26:47 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1388
21/10/01 01:26:47 INFO DAGScheduler: Submitting 4 missing tasks from ResultStage 4 (MapPartitionsRDD[6] at glom at StockAnalysis.java:57) (first 15 tasks are for partitions Vector(0, 1, 2, 3))
21/10/01 01:26:47 INFO TaskSchedulerImpl: Adding task set 4.0 with 4 tasks resource profile 0
21/10/01 01:26:47 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 16) (192.168.1.4, executor driver, partition 0, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/10/01 01:26:47 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 192.168.1.4:49920 in memory (size: 2.8 KiB, free: 1048.8 MiB)
21/10/01 01:26:47 INFO Executor: Running task 0.0 in stage 4.0 (TID 16)
21/10/01 01:26:47 INFO BlockManagerInfo: Removed broadcast_1_piece0 on 192.168.1.4:49920 in memory (size: 2.4 KiB, free: 1048.8 MiB)
21/10/01 01:26:47 INFO BlockManagerInfo: Removed broadcast_3_piece0 on 192.168.1.4:49920 in memory (size: 3.0 KiB, free: 1048.8 MiB)
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Getting 4 (624.0 B) non-empty blocks including 4 (624.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 6 ms
21/10/01 01:26:47 INFO Executor: Finished task 0.0 in stage 4.0 (TID 16). 3362 bytes result sent to driver
21/10/01 01:26:47 INFO TaskSetManager: Starting task 1.0 in stage 4.0 (TID 17) (192.168.1.4, executor driver, partition 1, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/10/01 01:26:47 INFO Executor: Running task 1.0 in stage 4.0 (TID 17)
21/10/01 01:26:47 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 16) in 57 ms on 192.168.1.4 (executor driver) (1/4)
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Getting 4 (624.0 B) non-empty blocks including 4 (624.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/10/01 01:26:47 INFO Executor: Finished task 1.0 in stage 4.0 (TID 17). 3319 bytes result sent to driver
21/10/01 01:26:47 INFO TaskSetManager: Starting task 2.0 in stage 4.0 (TID 18) (192.168.1.4, executor driver, partition 2, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/10/01 01:26:47 INFO Executor: Running task 2.0 in stage 4.0 (TID 18)
21/10/01 01:26:47 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 17) in 11 ms on 192.168.1.4 (executor driver) (2/4)
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Getting 4 (624.0 B) non-empty blocks including 4 (624.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/10/01 01:26:47 INFO Executor: Finished task 2.0 in stage 4.0 (TID 18). 3345 bytes result sent to driver
21/10/01 01:26:47 INFO TaskSetManager: Starting task 3.0 in stage 4.0 (TID 19) (192.168.1.4, executor driver, partition 3, NODE_LOCAL, 4271 bytes) taskResourceAssignments Map()
21/10/01 01:26:47 INFO Executor: Running task 3.0 in stage 4.0 (TID 19)
21/10/01 01:26:47 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 18) in 9 ms on 192.168.1.4 (executor driver) (3/4)
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Getting 4 (684.0 B) non-empty blocks including 4 (684.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
21/10/01 01:26:47 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 0 ms
21/10/01 01:26:47 INFO Executor: Finished task 3.0 in stage 4.0 (TID 19). 3345 bytes result sent to driver
21/10/01 01:26:47 INFO TaskSetManager: Finished task 3.0 in stage 4.0 (TID 19) in 12 ms on 192.168.1.4 (executor driver) (4/4)
21/10/01 01:26:47 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
21/10/01 01:26:47 INFO DAGScheduler: ResultStage 4 (collect at StockAnalysis.java:57) finished in 0.120 s
21/10/01 01:26:47 INFO DAGScheduler: Job 3 is finished. Cancelling potential speculative or zombie tasks for this job
21/10/01 01:26:47 INFO TaskSchedulerImpl: Killing all running tasks in stage 4: Stage finished
21/10/01 01:26:47 INFO DAGScheduler: Job 3 finished: collect at StockAnalysis.java:57, took 0.282463 s
OUTPUT (maximum selling price per stock, one entry per key, grouped by partition): [[(B8J,6.85), (A7J,9.85)], [(B7J,7.85), (F7J,3.85)], [(B6J,4.55), (S7J,15.85), (G7J,11.85)], [(D7J,2.85), (H7J,12.85), (B9J,8.85)]]
21/10/01 01:26:47 INFO SparkContext: Invoking stop() from shutdown hook
21/10/01 01:26:47 INFO SparkUI: Stopped Spark web UI at http://192.168.1.4:4040
21/10/01 01:26:47 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
21/10/01 01:26:47 INFO MemoryStore: MemoryStore cleared
21/10/01 01:26:47 INFO BlockManager: BlockManager stopped
21/10/01 01:26:47 INFO BlockManagerMaster: BlockManagerMaster stopped
21/10/01 01:26:47 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
21/10/01 01:26:47 INFO SparkContext: Successfully stopped SparkContext
21/10/01 01:26:47 INFO ShutdownHookManager: Shutdown hook called
21/10/01 01:26:47 INFO ShutdownHookManager: Deleting directory /private/var/folders/r_/b4vyn5dj269_2h9rqh1gcdrr0000gn/T/spark-d028941e-75aa-4301-8d9a-e7b9db98f782
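The OUTPUT above keeps exactly one (stock, price) pair per key, so the shuffle stage is effectively a max-per-key reduction. Below is a minimal plain-Java sketch of that reduction step, with no Spark dependency. The class name `MaxSellingPrice` and the sample pairs are illustrative (the pairs are taken from the printed RDD contents above; note the CSV evidently also contains an `S7J,15.85` row, which is why `S7J` resolves to 15.85 in the final output):

```java
import java.util.HashMap;
import java.util.Map;

public class MaxSellingPrice {

    // Reduce (stock, price) pairs to the maximum price per stock key,
    // mirroring what reduceByKey((a, b) -> Math.max(a, b)) does in Spark.
    public static Map<String, Double> maxPrices(String[][] records) {
        Map<String, Double> maxPrice = new HashMap<>();
        for (String[] r : records) {
            // Map.merge keeps the larger of the existing and incoming price.
            maxPrice.merge(r[0], Double.parseDouble(r[1]), Math::max);
        }
        return maxPrice;
    }

    public static void main(String[] args) {
        // Sample (stockSuffix, closingPrice) pairs from the dataset above.
        String[][] records = {
            {"B6J", "4.55"}, {"B8J", "6.85"}, {"B9J", "8.85"},
            {"A7J", "9.85"}, {"S7J", "1.85"}, {"S7J", "15.85"},
            {"D7J", "2.85"}, {"F7J", "3.85"}, {"G7J", "11.85"},
            {"H7J", "12.85"}, {"B7J", "7.85"}
        };
        System.out.println(maxPrices(records));
    }
}
```

In the Spark job itself, the same reduction would typically be expressed directly on the pair RDD as `stocksPairRDD.reduceByKey(Math::max)`, which is what produces one maximum-price entry per stock before `glom().collect()` shows them grouped by partition.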
