Errors in template examples in Weka Distributed Spark


Mehdi Jahanirad
Dear Dr Mark Hall,

I hope you are well. First of all, thank you for your excellent online course on Weka Distributed Spark. I tried to run the examples provided in the template, but I received the error below. I would really appreciate your advice on how to fix it.


14:49:32: ERROR - Exception in task 1.0 in stage 3.0 (TID 9)
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)
        at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)
        at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)
        at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)
        at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)
        at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)
        at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)
        at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)
        at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)
        at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
        at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)
        at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)
        at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
14:49:32: ERROR - Exception in task 0.0 in stage 3.0 (TID 8)
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
14:49:32: INFO - Removing broadcast 2
14:49:32: ERROR - Exception in task 2.0 in stage 3.0 (TID 10)
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-2,5,main]
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
14:49:32: INFO - Block broadcast_2 of size 13920 dropped from memory (free 14152894222)
14:49:32: ERROR - Task 1 in stage 3.0 failed 1 times; aborting job
14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]

14:49:33: [ERROR] RandomlyShuffleDataSparkJob$851737944|Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
Driver stacktrace:
weka.core.WekaException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
Driver stacktrace:
at weka.knowledgeflow.steps.AbstractSparkJob.runJob(AbstractSparkJob.java:294)
at weka.knowledgeflow.steps.AbstractSparkJob.processIncoming(AbstractSparkJob.java:232)
at weka.knowledgeflow.StepManagerImpl.processIncoming(StepManagerImpl.java:1060)
at weka.knowledgeflow.BaseExecutionEnvironment$6.run(BaseExecutionEnvironment.java:493)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
        [stack trace identical to the one above]
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

14:49:33: [Low] ArffHeaderSparkJob$512641084|Interrupted
14:49:33: [Low] TextViewer$1290580845|Interrupted
14:49:33: [Low] KMeansClustererSparkJob$178074647|Interrupted
14:49:33: [Low] RandomlyShuffleDataSparkJob$851737944|Interrupted

Below is the additional information that was requested:
1. Which OS are you using?
I'm using Windows 10 Enterprise.

2. Which version of Weka are you using?
I'm using the stable Weka 3.8.3 release, installed on my computer.

3. Which version of Java are you using?
I have installed both jdk-11.0.1 and jre1.8.0_191 on my computer.
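To double-check which of the two installations actually executes a program, a minimal sketch along these lines (the class name is just illustrative) prints the running JVM's version and home directory:

    // Illustrative check: print which JVM is actually executing, since both
    // JDK 11 and JRE 1.8 are installed side by side.
    public class ShowJvm {
        public static void main(String[] args) {
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("java.home    = " + System.getProperty("java.home"));
        }
    }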

4. Is this the distributedWekaSpark package or the distributedWekaSparkDev package?
It is the distributedWekaSpark package, which I downloaded and installed from the Weka Package Manager. I don't have distributedWekaSparkDev on my computer.

5. Do you have anything set in your CLASSPATH environment variable?
C:\Program Files\Java\jre1.8.0_191\bin; C:\Program Files\Java\jdk-11.0.1\bin; C:\Program Files\Weka-3-8\weka.jar. These are set under Environment Variables, under System variables; I didn't set any CLASSPATH under User variables on my local account. By the way, I'm on an admin domain and have admin permission as well.
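As a sanity check, a small sketch like the following (hypothetical class name) shows what those variables look like from inside a running JVM:

    // Hypothetical helper: print the environment variables that determine which
    // Java installation and which weka.jar are picked up at launch time.
    public class ShowEnv {
        public static void main(String[] args) {
            for (String name : new String[] {"CLASSPATH", "PATH", "JAVA_HOME"}) {
                System.out.println(name + " = " + System.getenv(name));
            }
        }
    }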
6. What other Weka packages do you have installed? The AutoWeka package is known to cause problems.

1.  Auto-WEKA (Classification, Regression, Attribute Selection), 2.6.1, loaded
2.  classificationViaClustering (Classification), 1.0.8, loaded
3.  classifierBasedAttributeSelection (Attribute selection), 1.0.5, loaded
4.  distributedWekaBase (Distributed), 1.0.17, loaded
5.  distributedWekaSpark (Distributed), 1.0.9, loaded
6.  gridSearch (Classification), 1.0.12, loaded
7.  LibLINEAR (Classification), 1.9.8, loaded
8.  LibSVM (Classification, Regression), 1.0.10, loaded
9.  massiveOnlineAnalysis (Data streams), 2018.06.0, loaded
10. multilayerPerceptronCS (Classification), 1.0.1, loaded
11. multiLayerPerceptrons (Classification/regression, Preprocessing), 1.0.10, loaded
12. multisearch (Classification), 2017.3.28, loaded
13. naiveBayesTree (Classification), 1.0.4, loaded
14. niftiLoader (Converter), 1.0.1, loaded
15. normalize (Preprocessing), 1.0.1, loaded
16. oneClassClassifier (Classification), 1.0.4, loaded
17. RBFNetwork (Classification/regression), 1.0.8, loaded
18. rotationForest (Ensemble learning), 1.0.2, loaded
19. RPlugin (R integration), 1.2.27, loaded
20. scatterPlot3D (Visualization), 1.0.7, loaded
21. StudentFilters (Preprocessing), 2.0.0, loaded
22. supervisedAttributeScaling (Preprocessing), 1.0.2, loaded
23. thresholdSelector (Classification), 1.0.3, loaded
24. timeseriesForecasting (Time series), 1.0.25, loaded
25. userClassifier (Classification/regression), 1.0.3, loaded
26. wekaDeeplearning4j (Classification/Regression), 1.5.12, loaded
27. WekaExcel (Converter), 1.0.7, loaded
28. WekaODF (Converter), 1.0.4, loaded

By the way, I think I'm using Java 1.8 as well. I hope the information above helps in this situation. Once again, I really appreciate your help and consideration.



I look forward to hearing from you about this matter.

Thanks in advance

With best regards 

Mehdi Jahanirad

BIT (Hons) (MMU), MCompSc (Malaya), PhD (Malaya)




Re: Errors in template examples in Weka Distributed Spark

Mark Hall

Hi Mehdi,

As I said in our private email exchange, this looks to be a problem between the Spark libraries and Java 9 or later. It is possible that some process launched by Spark is picking up your Java 11 installation. Perhaps try uninstalling Java 11 entirely from your computer.
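For background, the failure is a linkage error: Spark builds from the Java 8 era call sun.nio.ch.DirectBuffer.cleaner() expecting the return type sun.misc.Cleaner, but from Java 9 onwards that method returns jdk.internal.ref.Cleaner, so the old call site no longer resolves. A minimal probe (illustrative only, not Spark's actual code) makes the change visible:

    import java.lang.reflect.Method;
    import java.nio.ByteBuffer;

    // Illustrative probe: prints the runtime return type of DirectBuffer.cleaner().
    // On Java 8 this is "sun.misc.Cleaner"; on Java 9+ it is "jdk.internal.ref.Cleaner",
    // which is why Java-8-compiled Spark code throws
    // java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;
    public class CleanerProbe {
        public static void main(String[] args) throws Exception {
            ByteBuffer buf = ByteBuffer.allocateDirect(16);
            Method cleaner = buf.getClass().getMethod("cleaner");
            System.out.println(cleaner.getReturnType().getName());
        }
    }

So the Spark libraries bundled with the package need to run on a Java 8 JVM.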

 

Cheers,

Mark.

 

On 11/03/19, 8:41 PM, "[hidden email] on behalf of Mehdi Jahanirad" <[hidden email] on behalf of [hidden email]> wrote:

 

Dear Dr Mark Hall

 

Hope you are well, actually first of all thanks for your amazing online course for Weka Distributed Spark, I try to run the examples that you provided on the template, however, I received this error, I would be really appreciated if you can advice how to fix the errors. 

 

 

14:49:32: ERROR - Exception in task 1.0 in stage 3.0 (TID 9)

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

14:49:32: 

14:49:32: ERROR - Exception in task 0.0 in stage 3.0 (TID 8)

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

14:49:32: 

14:49:32: INFO - Removing broadcast 2

14:49:32: ERROR - Exception in task 2.0 in stage 3.0 (TID 10)

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

14:49:32: 

14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-0,5,main]

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

 

14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-2,5,main]

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

14:49:32: 

14:49:32: INFO - Block broadcast_2 of size 13920 dropped from memory (free 14152894222)

14:49:32: ERROR - Task 1 in stage 3.0 failed 1 times; aborting job

14:49:32: ERROR - Uncaught exception in thread Thread[Executor task launch worker-1,5,main]

14:49:32: java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$.dispose(BlockManager.scala:1225)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.cleanUp(ByteBufferInputStream.scala:76)

14:49:32: 

14:49:32:  at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:48)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:328)

14:49:32: 

14:49:32:  at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:384)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2723)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:3050)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:3060)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1561)

14:49:32: 

14:49:32:  at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)

14:49:32: 

14:49:32:  at org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)

14:49:32: 

14:49:32:  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)

14:49:32: 

14:49:32:  at org.apache.spark.storage.BlockManager$LazyProxyIterator$1.hasNext(BlockManager.scala:1171)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)

14:49:32: 

14:49:32:  at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:30)

14:49:32: 

14:49:32:  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)

14:49:32: 

14:49:32:  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)

14:49:32: 

14:49:32:  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:63)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)

14:49:32: 

14:49:32:  at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)

14:49:32: 

14:49:32:  at org.apache.spark.scheduler.Task.run(Task.scala:54)

14:49:32: 

14:49:32:  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)

14:49:32: 

14:49:32:  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)

14:49:32: 

14:49:32:  at java.base/java.lang.Thread.run(Thread.java:834)

14:49:32: 

 

 

14:49:33: [ERROR] RandomlyShuffleDataSparkJob$851737944|Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

        [task stack trace identical to the one logged above]

Driver stacktrace:

weka.core.WekaException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

        [task stack trace identical to the one logged above]

Driver stacktrace:

at weka.knowledgeflow.steps.AbstractSparkJob.runJob(AbstractSparkJob.java:294)
at weka.knowledgeflow.steps.AbstractSparkJob.processIncoming(AbstractSparkJob.java:232)
at weka.knowledgeflow.StepManagerImpl.processIncoming(StepManagerImpl.java:1060)
at weka.knowledgeflow.BaseExecutionEnvironment$6.run(BaseExecutionEnvironment.java:493)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 9, localhost): java.lang.NoSuchMethodError: sun.nio.ch.DirectBuffer.cleaner()Lsun/misc/Cleaner;

        [task stack trace identical to the one logged above]

Driver stacktrace:

at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 

14:49:33: [Low] ArffHeaderSparkJob$512641084|Interrupted
14:49:33: [Low] TextViewer$1290580845|Interrupted
14:49:33: [Low] KMeansClustererSparkJob$178074647|Interrupted
14:49:33: [Low] RandomlyShuffleDataSparkJob$851737944|Interrupted

 

********** Below you can also find the relevant information that was requested **********

1. Which OS are you using?

I'm using Windows 10 Enterprise.

 

2. Which version of Weka are you using?

I'm using the Weka 3.8.3 stable release installed on my computer.

 

3. Which version of Java are you using?

I have both jdk-11.0.1 and jre1.8.0_191 installed on my computer.
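Since two JVMs are installed, it may help to confirm which one Weka is actually started under. Here is a tiny probe that can be compiled and run with the same java binary that launches Weka (my own sketch; the class name PrintJvmInfo is just a placeholder):

// PrintJvmInfo.java -- minimal sketch to confirm which JVM is in use.
// Compile with `javac PrintJvmInfo.java` and run with `java PrintJvmInfo`,
// using the same java executable that starts Weka.
public class PrintJvmInfo {
    public static void main(String[] args) {
        // Prints e.g. "1.8.0_191" on Java 8 or "11.0.1" on Java 11
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}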

 

4. Is this the distributedWekaSpark package or the distributedWekaSparkDev package?

It is the distributedWekaSpark package, which I downloaded and installed through the Weka Package Manager. I don't have distributedWekaSparkDev on my computer.

 

5. Do you have anything set in your CLASSPATH environment variable?

The following entries are set under Environment Variables -> System variables: C:\Program Files\Java\jre1.8.0_191\bin; C:\Program Files\Java\jdk-11.0.1\bin; C:\Program Files\Weka-3-8\weka.jar. I didn't set any CLASSPATH under User variables on my local account. By the way, I'm on an admin domain and have admin permissions as well.
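For completeness, the classpath the JVM actually resolves can be dumped the same way (again my own sketch; PrintClasspath is just a placeholder name):

// PrintClasspath.java -- sketch that dumps both the CLASSPATH environment
// variable and the effective runtime classpath of the JVM.
public class PrintClasspath {
    public static void main(String[] args) {
        // Raw environment variable; prints "null" if it is not set at all
        System.out.println("CLASSPATH env   = " + System.getenv("CLASSPATH"));
        // What this particular JVM instance is actually using
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
    }
}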

6. What other Weka packages do you have installed? The AutoWeka package is known to cause problems.

 # | Package                           | Category                                         | Installed | Repository | Loaded
 1 | Auto-WEKA                         | Classification, Regression, Attribute Selection | 2.6.1     | 2.6.1      | Yes
 2 | classificationViaClustering       | Classification                                   | 1.0.8     | 1.0.8      | Yes
 3 | classifierBasedAttributeSelection | Attribute selection                              | 1.0.5     | 1.0.5      | Yes
 4 | distributedWekaBase               | Distributed                                      | 1.0.17    | 1.0.17     | Yes
 5 | distributedWekaSpark              | Distributed                                      | 1.0.9     | 1.0.9      | Yes
 6 | gridSearch                        | Classification                                   | 1.0.12    | 1.0.12     | Yes
 7 | LibLINEAR                         | Classification                                   | 1.9.8     | 1.9.8      | Yes
 8 | LibSVM                            | Classification, Regression                       | 1.0.10    | 1.0.10     | Yes
 9 | massiveOnlineAnalysis             | Data streams                                     | 2018.06.0 | 2018.06.0  | Yes
10 | multilayerPerceptronCS            | Classification                                   | 1.0.1     | 1.0.1      | Yes
11 | multiLayerPerceptrons             | Classification/regression, Preprocessing         | 1.0.10    | 1.0.10     | Yes
12 | multisearch                       | Classification                                   | 2017.3.28 | 2017.3.28  | Yes
13 | naiveBayesTree                    | Classification                                   | 1.0.4     | 1.0.4      | Yes
14 | niftiLoader                       | Converter                                        | 1.0.1     | 1.0.1      | Yes
15 | normalize                         | Preprocessing                                    | 1.0.1     | 1.0.1      | Yes
16 | oneClassClassifier                | Classification                                   | 1.0.4     | 1.0.4      | Yes
17 | RBFNetwork                        | Classification/regression                        | 1.0.8     | 1.0.8      | Yes
18 | rotationForest                    | Ensemble learning                                | 1.0.2     | 1.0.2      | Yes
19 | RPlugin                           | R integration                                    | 1.2.27    | 1.2.27     | Yes
20 | scatterPlot3D                     | Visualization                                    | 1.0.7     | 1.0.7      | Yes
21 | StudentFilters                    | Preprocessing                                    | 2.0.0     | 2.0.0      | Yes
22 | supervisedAttributeScaling        | Preprocessing                                    | 1.0.2     | 1.0.2      | Yes
23 | thresholdSelector                 | Classification                                   | 1.0.3     | 1.0.3      | Yes
24 | timeseriesForecasting             | Time series                                      | 1.0.25    | 1.0.25     | Yes
25 | userClassifier                    | Classification/regression                        | 1.0.3     | 1.0.3      | Yes
26 | wekaDeeplearning4j                | Classification/Regression                        | 1.5.12    | 1.5.12     | Yes
27 | WekaExcel                         | Converter                                        | 1.0.7     | 1.0.7      | Yes
28 | WekaODF                           | Converter                                        | 1.0.4     | 1.0.4      | Yes

By the way, I think I'm using Java 1.8 as well. I hope the information above helps in this situation. Once again, I really appreciate your help and consideration.

 

 

 

I look forward to hearing from you about this matter.

 

Thanks in advance

 

With best regards,

Mehdi Jahanirad

BIT (Hons) (MMU), MCompSc (Malaya), PhD (Malaya)

 



Re: Errors in templates examples in Weka Distributed Spark

omcgrath
In reply to this post by Mehdi Jahanirad
On Windows 10, I had the same problem with the Weka 3.8.4 Windows 64-bit
build, which comes bundled with Azul Zulu:

https://waikato.github.io/weka-wiki/downloading_weka/#windows

I uninstalled it and replaced it with the Weka 3.8.3 Windows x64 JRE build
linked here:

https://community.hitachivantara.com/s/article/data-mining-weka

Now all the Spark templates work fine in KnowledgeFlow.
Perhaps the problem relates to this issue identified and addressed by the
Apache Spark folks:

https://github.com/apache/spark/pull/22993 
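For anyone curious, the signature change behind the NoSuchMethodError is
easy to see with a small reflective probe (my own sketch, not code from
that PR; CleanerProbe is just a placeholder name):

import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// CleanerProbe.java -- shows why code compiled against JDK 8 breaks on
// JDK 9+: DirectBuffer.cleaner() returns sun.misc.Cleaner on JDK 8 but
// jdk.internal.ref.Cleaner on JDK 9+, so a call compiled against the old
// signature fails to link and throws java.lang.NoSuchMethodError.
public class CleanerProbe {
    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        // Look the method up reflectively so this compiles on any JDK
        Method cleaner = buf.getClass().getMethod("cleaner");
        // Prints sun.misc.Cleaner on JDK 8, jdk.internal.ref.Cleaner on 9+
        System.out.println(cleaner.getReturnType().getName());
    }
}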

Owen


