hadoop - Using DBOutputFormat to write data to MySQL causes IOException -


Recently, I have been learning MapReduce, and I want to use it to write data to a MySQL database. There are two ways to do so: DBOutputFormat and Sqoop. I tried the first one (refer here), but encountered a problem with the following error:
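For context, a DBOutputFormat job driver is typically configured roughly as in the sketch below. This is not the question's actual code (which is only linked); the JDBC URL, credentials, table name, and column names are hypothetical placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

public class MySqlJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Register the JDBC driver class and connection details.
        // URL, user, and password here are placeholders.
        DBConfiguration.configureDB(conf,
                "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/testdb",
                "user", "password");

        Job job = Job.getInstance(conf, "write-to-mysql");
        job.setJarByClass(MySqlJobDriver.class);
        // ... set mapper, reducer, input format, and key/value classes ...

        // Output table and columns (hypothetical names); the reducer's
        // output key must be a DBWritable with matching fields.
        DBOutputFormat.setOutput(job, "word_count", "word", "count");
        job.setOutputFormatClass(DBOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that the MySQL Connector/J jar must be visible to the tasks at runtime, not only at compile time.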

...
16/05/25 09:36:53 INFO mapred.LocalJobRunner: 3 / 3 copied.
16/05/25 09:36:53 INFO mapred.LocalJobRunner: reduce task executor complete.
16/05/25 09:36:53 WARN output.FileOutputCommitter: Output Path is null in cleanupJob()
16/05/25 09:36:53 WARN mapred.LocalJobRunner: job_local1404930626_0001
java.lang.Exception: java.io.IOException
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.io.IOException
    at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:185)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/05/25 09:36:54 INFO mapreduce.Job: Job job_local1404930626_0001 failed with state FAILED due to: NA
16/05/25 09:36:54 INFO mapreduce.Job: Counters: 38
    File System Counters
        FILE: Number of bytes read=32583
        FILE: Number of bytes written=796446
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=402
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=18
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
...

When I manually use JDBC to connect and insert data, it turns out successful. I also notice that the map/reduce task executors complete before the job encounters the IOException, so I guess the problem is database-related.
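One clue worth knowing when debugging this: DBOutputFormat.getRecordWriter() opens its own JDBC connection inside the reduce task and, in the Hadoop 2.x source this stack trace points at, rethrows any failure (driver jar missing from the task classpath, bad URL or credentials, table/column mismatch) as an IOException carrying only the original exception's message, which may be null; hence the bare "java.io.IOException". The record writer prepares an INSERT statement of the shape shown below. This is a simplified re-implementation for illustration only, with hypothetical table and column names:

```java
// Simplified sketch of the INSERT statement DBOutputFormat prepares in
// getRecordWriter(); not the actual Hadoop source, just the same shape.
public class DbOutputQuerySketch {

    // Builds "INSERT INTO <table> (<f1>,<f2>,...) VALUES (?,?,...)"
    // from a table name and its column names.
    static String constructQuery(String table, String[] fieldNames) {
        StringBuilder sql = new StringBuilder("INSERT INTO ")
                .append(table).append(" (")
                .append(String.join(",", fieldNames))
                .append(") VALUES (");
        for (int i = 0; i < fieldNames.length; i++) {
            sql.append(i == 0 ? "?" : ",?");
        }
        return sql.append(")").toString();
    }

    public static void main(String[] args) {
        // Hypothetical table/columns, mirroring a word-count style job.
        System.out.println(constructQuery("word_count",
                new String[] { "word", "count" }));
    }
}
```

If this statement's column list does not match the fields the reducer's DBWritable writes, or the connection itself cannot be opened, the failure surfaces exactly as the opaque IOException above.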

My code is here. I would appreciate it if someone could help me figure out the problem.

Thanks in advance!

