cql - NoHostAvailableException thrown when reading data from Cassandra
I am using Cassandra 2.1.5 and cassandra-java-driver 2.0.10. I am facing the below exception when fetching data from a Cassandra table.
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Connection has been closed))
    at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:265)
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:179)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
    at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:33)
    at com.sun.proxy.$Proxy8.execute(Unknown Source)
    at com.exportstagging.sparktest.ProductDataLoader.dbQuery(ProductDataLoader.java:418)
    at com.exportstagging.sparktest.ProductDataLoader.main(ProductDataLoader.java:442)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.TransportException: [/127.0.0.1:9042] Connection has been closed))
    at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:216)
    at com.datastax.driver.core.RequestHandler.access$900(RequestHandler.java:45)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:276)
    at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.run(RequestHandler.java:374)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
The Cassandra table has 50,000 columns and 380,000 rows. When I fire the query below, I get the above error.
SELECT * FROM mykeyspace.productdata WHERE id IN (1,...,6000);
I have used token() to create batches for fetching the data from Cassandra.
Trying to fetch 6000 partitions at once, from a 380,000-row table with 50,000 columns, seems like total overkill.
A multi-partition select should be done using asynchronous queries, with one query per partition. Also, having 50,000 columns in a table looks like a data modeling problem. What is the use case? Fetching 380,000 rows at once shouldn't be necessary (and would take a long time anyway); it looks like an analytical query that would be far better handled through Spark.
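A minimal sketch of the one-query-per-partition approach with the 2.0 Java driver. The keyspace, table, and id range are taken from the question; the window size of 128 concurrent in-flight queries is an arbitrary illustrative choice, not a recommendation from the driver docs.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

import java.util.ArrayList;
import java.util.List;

public class AsyncPartitionFetch {

    // Pure helper: split the id list into windows so no more than
    // `windowSize` queries are in flight at once.
    static List<List<Integer>> windows(List<Integer> ids, int windowSize) {
        List<List<Integer>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += windowSize) {
            out.add(new ArrayList<>(ids.subList(i, Math.min(i + windowSize, ids.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");
        // One prepared single-partition query instead of a giant IN (...) clause.
        PreparedStatement ps = session.prepare("SELECT * FROM productdata WHERE id = ?");

        List<Integer> ids = new ArrayList<>();
        for (int id = 1; id <= 6000; id++) ids.add(id);

        // Fire one async query per partition, throttled in windows.
        for (List<Integer> window : windows(ids, 128)) {
            List<ResultSetFuture> futures = new ArrayList<>();
            for (Integer id : window) {
                futures.add(session.executeAsync(ps.bind(id)));
            }
            for (ResultSetFuture f : futures) {
                for (Row row : f.getUninterruptibly()) {
                    // process row here
                }
            }
        }
        cluster.close();
    }
}
```

Throttling matters here: firing all 6000 `executeAsync` calls unbounded just moves the overload from the IN clause to the coordinator's request queue.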
By trying to read all that data at once, you are likely causing an OOM on the Cassandra node, which would explain why you're getting the message "Connection has been closed".
My advice is to review your data model, split the load the way it should be split (asynchronous queries), and use proper paging. If you want to crunch all the data at once, do it with Spark through batch processing, output the result into a Cassandra table, and access that result table through smaller, faster interactive queries.
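The "proper paging" part can be sketched with the driver's built-in result-set paging (available with the 2.0 driver against Cassandra 2.0+). The keyspace and table names are again taken from the question, and the fetch size of 1000 rows per page is only a starting point to tune.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedScan {

    // Pure helper: round trips needed to scan `totalRows` at a given fetch size.
    static int pageCount(int totalRows, int fetchSize) {
        return (totalRows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("mykeyspace");

        // Ask for 1000 rows per page instead of the whole result set
        // in a single response.
        Statement stmt = new SimpleStatement("SELECT * FROM productdata");
        stmt.setFetchSize(1000);

        ResultSet rs = session.execute(stmt);
        for (Row row : rs) {
            // The driver fetches the next page transparently as you iterate.
        }
        cluster.close();
    }
}
```

With paging, each response stays small, so neither the coordinator nor the client has to hold 380,000 wide rows in memory at once.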