spring mvc + mybatis + mysql 5.7 saves datetimes 8 hours behind - mysql

spring mvc + mybatis + mysql 5.7 + jdk8
I use MySQL 5.7 to store JSON data.
I use the JDK 8 time API (LocalDateTime now = LocalDateTime.now();) to get the current datetime and save it to the database.
But I found that the datetimes stored in the database are about 8 hours behind.
My troubleshooting process:
1. Code problem? I debugged it and found that the time in the object is correct right before it is saved to the database.
2. System time zone problem? I checked the local computer, the server, the database, and the other machines involved; all are in the East Eight (UTC+8) time zone, so no problem there. (I am in China.)
3. Viewed the SQL printed on the console:
DEBUG [ 2017-08-23 12:42:35 970 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Preparing: insert into log_user_operation (pk_id, user_code, user_name, login_ip, url, operation_type, operation_content, remark, create_time) values (?, ?, ?, ?, ?, ?, ?, ?, ?)
DEBUG [ 2017-08-23 12:42:36 005 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Parameters: null, admin(String), Admin(String), 127.0.0.1(String), http://localhost:8080/bi/login-check(String), SELECT(String), 用户登录(String), (String), 2017-08-23 12:42:32.9(Timestamp)
DEBUG [ 2017-08-23 12:42:36 016 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - <== Updates: 1
DEBUG [ 2017-08-23 12:42:36 020 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Preparing: SELECT LAST_INSERT_ID()
DEBUG [ 2017-08-23 12:42:36 021 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - ==> Parameters:
TRACE [ 2017-08-23 12:42:36 039 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.trace(BaseJdbcLogger.java:151) - <== Columns: LAST_INSERT_ID()
TRACE [ 2017-08-23 12:42:36 039 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.trace(BaseJdbcLogger.java:151) - <== Row: 47
DEBUG [ 2017-08-23 12:42:36 042 ]: org.apache.ibatis.logging.jdbc.BaseJdbcLogger.debug(BaseJdbcLogger.java:145) - <== Total: 1
DEBUG [ 2017-08-23 12:42:36 047 ]: org.mybatis.spring.SqlSessionUtils.closeSqlSession(SqlSessionUtils.java:193) - Closing non transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@516c8654]
DEBUG [ 2017-08-23 12:42:36 048 ]: org.springframework.jdbc.datasource.DataSourceUtils.doReleaseConnection(DataSourceUtils.java:332) - Returning JDBC Connection to DataSource
I feel that everything is OK, but what is saved in the database is wrong.
The test result right now: (shown as a screenshot in the original post)
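A minimal way to double-check what the JVM itself reports, independent of MyBatis (my own sketch, not from the original post; nothing beyond the JDK is assumed):

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.TimeZone;

public class ZoneCheck {
    public static void main(String[] args) {
        // The zone the JDK 8 time API uses for LocalDateTime.now()
        System.out.println(ZoneId.systemDefault());        // e.g. Asia/Shanghai on an East-8 machine
        // The same default through the legacy API; the JDBC driver converts from this zone on the client side
        System.out.println(TimeZone.getDefault().getID()); // should match the line above
        System.out.println(LocalDateTime.now());           // local wall-clock time
    }
}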

The problem that had been bothering me for two days was finally settled this evening.
It is caused by the URL requirements of the new MySQL driver package:
jdbc:mysql://localhost:3306/ss?characterEncoding=utf8&useSSL=true&serverTimezone=UTC&nullNamePatternMatchesAll=true
Changing serverTimezone=UTC to serverTimezone=Hongkong solves the problem.
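For example, with a plain JDBC connection the corrected URL looks like this (a minimal sketch; the database name ss comes from the URL above, and the credentials are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectDemo {
    public static void main(String[] args) throws Exception {
        // serverTimezone=Hongkong tells Connector/J 6+ which zone the server uses,
        // so datetimes are no longer converted through UTC and shifted by 8 hours.
        String url = "jdbc:mysql://localhost:3306/ss"
                + "?characterEncoding=utf8&useSSL=true"
                + "&serverTimezone=Hongkong&nullNamePatternMatchesAll=true";
        try (Connection conn = DriverManager.getConnection(url, "root", "secret")) {
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }
    }
}

An equivalent value for the East Eight zone is serverTimezone=Asia/Shanghai.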

Related

Just installed PhpStorm 2021.3.2 crashes on startup

I've fully deleted PhpStorm 2019 with all plugins and caches, downloaded and installed PhpStorm 2021.3.2, and it crashes on startup. Reinstallation with a clean-up doesn't help.
idea.log (I had to cut out some of the log because Stack Overflow wouldn't let me publish the question otherwise):
2022-02-15 20:59:56,892 [ 263] INFO - #com.intellij.idea.Main - CPU cores: 8; ForkJoinPool.commonPool: java.util.concurrent.ForkJoinPool@9a2e852[Running, parallelism = 7, size = 6, active = 3, running = 3, steals = 3, tasks = 0, submissions = 0]; factory: com.intellij.concurrency.IdeaForkJoinWorkerThreadFactory@bd6cba9
2022-02-15 20:59:57,533 [ 904] INFO - #com.intellij.idea.Main - JNA library (64-bit) loaded in 641 ms
2022-02-15 20:59:57,772 [ 1143] INFO - penapi.util.io.win32.IdeaWin32 - Native filesystem for Windows is operational
2022-02-15 20:59:58,137 [ 1508] INFO - .intellij.util.io.HttpRequests - Application is not initialized yet; Using default SSL configuration to connect to https://www.jetbrains.com/config/IdeaCloudConfig.xml
2022-02-15 20:59:58,818 [ 2189] INFO - nfig.ETagCloudConfigFileClient - === Get cloud config URL: https://cloudconfig.jetbrains.com/cloudconfig/files ===
2022-02-15 20:59:59,331 [ 2702] ERROR - Config.CloudConfigProviderImpl - Invalid credentials
com.jetbrains.cloudconfig.exception.UnauthorizedException: Invalid credentials
at com.jetbrains.cloudconfig.AbstractHttpClient.download(AbstractHttpClient.java:95)
at com.jetbrains.cloudconfig.CloudConfigFileClient.list(CloudConfigFileClient.java:192)
at com.intellij.idea.cloudConfig.ETagCloudConfigFileClient.list(ETagCloudConfigFileClient.java:36)
at com.intellij.idea.cloudConfig.CloudConfigProviderImpl.<init>(CloudConfigProviderImpl.java:46)
at com.intellij.idea.MainImpl.beforeImportConfigs(MainImpl.java:47)
at com.intellij.idea.StartupUtil.importConfig(StartupUtil.java:404)
at com.intellij.idea.StartupUtil.lambda$start$16(StartupUtil.java:280)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1705)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1692)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.io.IOException: Server returned HTTP response code: 401 for URL: https://cloudconfig.jetbrains.com/cloudconfig/files/PhpStorm/
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1924)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:250)
at com.jetbrains.cloudconfig.AbstractHttpClient.download(AbstractHttpClient.java:92)
... 15 more
2022-02-15 20:59:59,336 [ 2707] ERROR - Config.CloudConfigProviderImpl - PhpStorm 2021.3.2 Build #PS-213.6777.58
2022-02-15 20:59:59,340 [ 2711] ERROR - Config.CloudConfigProviderImpl - JDK: 11.0.13; VM: OpenJDK 64-Bit Server VM; Vendor: JetBrains s.r.o.
2022-02-15 20:59:59,340 [ 2711] ERROR - Config.CloudConfigProviderImpl - OS: Windows 10
2022-02-15 20:59:59,345 [ 2716] INFO - #com.intellij.idea.Main - Importing configs to C:\Users\Semyon\AppData\Roaming\JetBrains\PhpStorm2021.3
2022-02-15 20:59:59,364 [ 2735] INFO - #com.intellij.idea.Main - Importing configs: oldConfigDir=[C:\Users\Semyon\.PhpStorm2021.3.2\config], newConfigDir=[C:\Users\Semyon\AppData\Roaming\JetBrains\PhpStorm2021.3], oldIdeHome=[null], oldPluginsDir=[C:\Users\Semyon\.PhpStorm2021.3.2\config\plugins], newPluginsDir=[C:\Users\Semyon\AppData\Roaming\JetBrains\PhpStorm2021.3\plugins]
2022-02-15 21:00:00,178 [ 3549] INFO - llij.ide.plugins.PluginManager - Plugin PluginDescriptor(name=PHP, id=com.jetbrains.php, descriptorPath=plugin.xml, path=C:\Program Files\JetBrains\PhpStorm 2021.3.2\plugins\php, version=213.6777.58, package=null, isBundled=true) misses optional descriptor php-shared-indexes.xml
2022-02-15 21:00:00,964 [ 4335] INFO - #com.intellij.idea.Main - Migrating plugin io.pmmp.phpstorm.stub version 3.11.5
2022-02-15 21:00:00,996 [ 4367] INFO - #com.intellij.idea.Main - The vmoptions file has changed, restarting...
2022-02-15 21:00:01,013 [ 4384] INFO - #com.intellij.util.Restarter - run restarter: [C:\Program Files\JetBrains\PhpStorm 2021.3.2\bin\restarter.exe, 14340, 1, C:\Program Files\JetBrains\PhpStorm 2021.3.2\bin\phpstorm64.exe]
2022-02-15 21:00:01,589 [ 4960] INFO - #com.intellij.idea.Main - ------------------------------------------------------ IDE SHUTDOWN ------------------------------------------------------
2022-02-15 21:00:01,594 [ 4965] INFO - org.jetbrains.io.BuiltInServer - web server stopped

Liquibase runs changeset again when starting Spring boot application in debug in idea

When I start my Spring Boot application in the default run mode, it executes the Liquibase changeset as normal.
Stopping and running it again skips the changeset as expected, since it was already executed.
Starting in debug mode, however, tries to execute the changeset again even though it already ran.
I also turned it around and first executed the change in debug mode, which neatly creates the entry in DATABASECHANGELOG,
and when running it a second time in non-debug mode the same thing happens.
My changes are in native SQL format; the database is MariaDB with the MySQL connector.
I compared the MD5 sums of both DATABASECHANGELOG entries (run first vs. debug first) and they are the same.
The Liquibase version is 3.5.4; I also tested 3.5.5 with the same result.
mysql Ver 15.1 Distrib 10.0.34-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
Here is also the logging part where Liquibase fails:
2018-05-01 13:54:00.610 DEBUG 27486 --- [ main] liquibase : Connected to xxx#localhost#jdbc:mysql://localhost:3306/project1_db?nullNamePatternMatchesAll=true
2018-05-01 13:54:00.610 DEBUG 27486 --- [ main] liquibase : Setting auto commit to false from true
2018-05-01 13:54:00.642 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: select count(*) from project1_db.DATABASECHANGELOGLOCK
2018-05-01 13:54:00.647 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: select count(*) from project1_db.DATABASECHANGELOGLOCK
2018-05-01 13:54:00.648 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: SELECT LOCKED FROM project1_db.DATABASECHANGELOGLOCK WHERE ID=1
2018-05-01 13:54:00.649 DEBUG 27486 --- [ main] liquibase : Lock Database
2018-05-01 13:54:00.655 DEBUG 27486 --- [ main] liquibase : Executing UPDATE database command: UPDATE project1_db.DATABASECHANGELOGLOCK SET LOCKED = 1, LOCKEDBY = 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)', LOCKGRANTED = '2018-05-01 13:54:00.650' WHERE ID = 1 AND LOCKED = 0
2018-05-01 13:54:00.657 INFO 27486 --- [ main] liquibase : Successfully acquired change log lock
2018-05-01 13:54:00.672 DEBUG 27486 --- [ main] liquibase : Opening file:/home/xxx/project1-connector/project1-backend/target/classes/db/changelog/db.changelog-master.yaml as classpath:/db/changelog/db.changelog-master.yaml
2018-05-01 13:54:00.687 DEBUG 27486 --- [ main] liquibase : includeAll for db/changelog/changes/
2018-05-01 13:54:00.687 DEBUG 27486 --- [ main] liquibase : Using file opener for includeAll: liquibase.integration.spring.SpringLiquibase$SpringResourceOpener(jdk.internal.loader.ClassLoaders$AppClassLoader)
2018-05-01 13:54:00.690 DEBUG 27486 --- [ main] liquibase : Opening file:/home/xxx/project1-connector/project1-backend/target/classes/db/changelog/changes/db.change.sql as db/changelog/changes/db.change.sql
2018-05-01 13:54:00.690 DEBUG 27486 --- [ main] liquibase : Opening file:/home/xxx/project1-connector/project1-backend/target/classes/db/changelog/changes/db.change.sql as db/changelog/changes/db.change.sql
2018-05-01 13:54:00.694 DEBUG 27486 --- [ main] liquibase : Computed checksum for 1525175640693 as 422ae5f56810de3fc5eeb17bb4af5afe
2018-05-01 13:54:00.710 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: SELECT MD5SUM FROM project1_db.DATABASECHANGELOG WHERE MD5SUM IS NOT NULL LIMIT 1
2018-05-01 13:54:00.711 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: select count(*) from project1_db.DATABASECHANGELOG
2018-05-01 13:54:00.712 INFO 27486 --- [ main] liquibase : Reading from project1_db.DATABASECHANGELOG
2018-05-01 13:54:00.712 DEBUG 27486 --- [ main] liquibase : Executing QUERY database command: SELECT * FROM project1_db.DATABASECHANGELOG ORDER BY DATEEXECUTED ASC, ORDEREXECUTED ASC
2018-05-01 13:54:00.716 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Computed checksum for inputStream as 0c73ccd0174246a5a7fab00d26cc30d2
2018-05-01 13:54:00.720 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Computed checksum for 7:0c73ccd0174246a5a7fab00d26cc30d2: as 22c8e24ae058e8e523819972d470a98a
2018-05-01 13:54:00.721 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Running Changeset:db/changelog/changes/db.change.sql::basicdata::xxx
2018-05-01 13:54:00.721 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Changeset db/changelog/changes/db.change.sql::basicdata::xxx
2018-05-01 13:54:00.721 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Reading ChangeSet: db/changelog/changes/db.change.sql::basicdata::xxx
2018-05-01 13:54:00.727 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Executing Statement: insert into company (id, name, created_at, created_by)
values (1, 'mycompany', now(), 'xxx')
2018-05-01 13:54:00.728 DEBUG 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Executing EXECUTE database command: insert into company (id, name, created_at, created_by)
values (1, 'mycompany', now(), 'xxx')
2018-05-01 13:54:00.733 ERROR 27486 --- [ main] liquibase : classpath:/db/changelog/db.changelog-master.yaml: db/changelog/changes/db.change.sql::basicdata::xxx: Change Set db/changelog/changes/db.change.sql::basicdata::xxx failed. Error: Duplicate entry '1' for key 'PRIMARY' [Failed SQL: insert into company (id, name, created_at, created_by)
values (1, 'mycompany', now(), 'xxx')]
I guess it could be a problem with logicalFilePath. Maybe your classpath is slightly different in debug mode and in a normal run. Try specifying logicalFilePath in your SQL files; see the Liquibase documentation on logicalFilePath for details.
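In a formatted SQL changelog that would look something like this (a sketch: the changeset id basicdata, author xxx, and the insert statement are taken from the log above, and the logicalFilePath value pins the path Liquibase records no matter how the file is resolved at runtime):

--liquibase formatted sql logicalFilePath:db/changelog/changes/db.change.sql

--changeset xxx:basicdata
insert into company (id, name, created_at, created_by)
values (1, 'mycompany', now(), 'xxx');

With the logical path fixed, run mode and debug mode compute the same changeset identity, so the insert should not be attempted twice.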

nginx error message while deploying rails app

It is my first time deploying an application.
I am working on a Ruby on Rails app using the latest version, following this tutorial: Deploy Ruby On Rails on Ubuntu 16.04 Xenial Xerus.
Everything was going right, but when restarting my application using touch my_app_name/current/tmp/restart.txt, I get the attached nginx error.
I tried to pull the error log from nginx using:
sudo tail -n 20 /var/log/nginx/error.log
and got the following:
[ N 2017-10-08 10:02:46.2189 29260/T6 Ser/Server.h:531 ]: [ServerThr.1] Shutdown finished
[ N 2017-10-08 10:02:46.2192 29260/T1 age/Cor/CoreMain.cpp:917 ]: Checking whether to disconnect long-running connections for process 30514, application /home/deploy/myapp/current/public (production)
[ N 2017-10-08 10:02:46.2274 29266/T3 age/Ust/UstRouterMain.cpp:430 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown)
[ N 2017-10-08 10:02:46.2279 29266/T1 age/Ust/UstRouterMain.cpp:500 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected...
[ N 2017-10-08 10:02:46.2281 29266/T5 Ser/Server.h:886 ]: [UstRouterApiServer] Freed 0 spare client objects
[ N 2017-10-08 10:02:46.2282 29266/T5 Ser/Server.h:531 ]: [UstRouterApiServer] Shutdown finished
[ N 2017-10-08 10:02:46.2313 29266/T3 Ser/Server.h:531 ]: [UstRouter] Shutdown finished
[ N 2017-10-08 10:02:46.3166 29266/T1 age/Ust/UstRouterMain.cpp:531 ]: Passenger UstRouter shutdown finished
[ N 2017-10-08 10:02:46.7083 29260/T1 age/Cor/CoreMain.cpp:1068 ]: Passenger core shutdown finished
2017/10/08 10:02:47 [info] 30632#30632: Using 32768KiB of shared memory for nchan in /etc/nginx/nginx.conf:71
[ N 2017-10-08 10:02:47.8959 30639/T1 age/Wat/WatchdogMain.cpp:1283 ]: Starting Passenger watchdog...
[ N 2017-10-08 10:02:47.9446 30642/T1 age/Cor/CoreMain.cpp:1083 ]: Starting Passenger core...
[ N 2017-10-08 10:02:47.9459 30642/T1 age/Cor/CoreMain.cpp:248 ]: Passenger core running in multi-application mode.
[ N 2017-10-08 10:02:47.9815 30642/T1 age/Cor/CoreMain.cpp:830 ]: Passenger core online, PID 30642
[ N 2017-10-08 10:02:48.0532 30648/T1 age/Ust/UstRouterMain.cpp:537 ]: Starting Passenger UstRouter...
[ N 2017-10-08 10:02:48.0571 30648/T1 age/Ust/UstRouterMain.cpp:350 ]: Passenger UstRouter online, PID 30648
[ N 2017-10-08 10:02:50.4687 30642/T8 age/Cor/SecurityUpdateChecker.h:374 ]: Security update check: no update found (next check in 24 hours)
App 30667 stdout:
App 30737 stdout:
@dstull I do not know how to thank you, brother; you got the point. It was an issue with my Rails app. I had finished my app at the development level and was using a theme (a Bootstrap theme that I bought). My app was trying to call a method on nil values, since nothing was initialized yet.

Apache Pig error while dumping Json data

I have a JSON file and want to load it using Apache Pig.
I am using the built-in JsonLoader to load the JSON data. Below is the sample JSON data.
cat jsondata1.json
{ "response": { "id": 10123, "thread": "Sloths", "comments": ["Sloths are adorable So chill"] }, "response_time": 0.425 }
{ "response": { "id": 13828, "thread": "Bigfoot", "comments": ["hello world"] } , "response_time": 0.517 }
Here I am loading the JSON data using the built-in JsonLoader. Loading reports no error, but dumping the data gives the following error:
grunt> a = load '/home/cloudera/jsondata1.json' using JsonLoader('response:tuple (id:int, thread:chararray, comments:bag {tuple(comment:chararray)}), response_time:double');
grunt> dump a;
2016-04-17 01:11:13,286 [pool-4-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed file:/home/cloudera/jsondata1.json:0+229
2016-04-17 01:11:13,287 [pool-4-thread-1] WARN org.apache.hadoop.conf.Configuration - dfs.https.address is deprecated. Instead, use dfs.namenode.https-address
2016-04-17 01:11:13,311 [pool-4-thread-1] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
2016-04-17 01:11:13,321 [pool-4-thread-1] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: a[5,4] C: R:
2016-04-17 01:11:13,349 [Thread-16] INFO org.apache.hadoop.mapred.LocalJobRunner - Map task executor complete.
2016-04-17 01:11:13,351 [Thread-16] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local801054416_0004
java.lang.Exception: org.codehaus.jackson.JsonParseException: Current token (FIELD_NAME) not numeric, can not use numeric value accessors
at [Source: java.io.ByteArrayInputStream@2484de3c; line: 1, column: 120]
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:406)
Caused by: org.codehaus.jackson.JsonParseException: Current token (FIELD_NAME) not numeric, can not use numeric value accessors
at [Source: java.io.ByteArrayInputStream@2484de3c; line: 1, column: 120]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1291)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at org.codehaus.jackson.impl.JsonNumericParserBase._parseNumericValue(JsonNumericParserBase.java:399)
at org.codehaus.jackson.impl.JsonNumericParserBase.getDoubleValue(JsonNumericParserBase.java:311)
at org.apache.pig.builtin.JsonLoader.readField(JsonLoader.java:203)
at org.apache.pig.builtin.JsonLoader.getNext(JsonLoader.java:157)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:268)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2016-04-17 01:11:13,548 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_local801054416_0004
2016-04-17 01:11:13,548 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases a
2016-04-17 01:11:13,548 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: a[5,4] C: R:
2016-04-17 01:11:18,059 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2016-04-17 01:11:18,059 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local801054416_0004 has failed! Stop running all dependent jobs
2016-04-17 01:11:18,059 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-04-17 01:11:18,059 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2016-04-17 01:11:18,060 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2016-04-17 01:11:18,060 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.0.0-cdh4.7.0 0.11.0-cdh4.7.0 cloudera 2016-04-17 01:11:12 2016-04-17 01:11:18 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_local801054416_0004 a MAP_ONLY Message: Job failed! file:/tmp/temp-1766116741/tmp1151698221,
Input(s):
Failed to read data from "/home/cloudera/jsondata1.json"
Output(s):
Failed to produce result in "file:/tmp/temp-1766116741/tmp1151698221"
Job DAG:
job_local801054416_0004
2016-04-17 01:11:18,060 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2016-04-17 01:11:18,061 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias a
Details at logfile: /home/cloudera/pig_1460877001124.log
I am not able to find the issue. How can I define the correct schema for the above JSON data?
Try this:
comments:{(chararray)}
because this version:
comments:bag {tuple(comment:chararray)}
fits this JSON schema:
"comments": [{comment:"hello world"}]
and you have simple string values, not another nested documents:
"comments": ["hello world"]

JsonLoader throws error in pig

I am unable to decode this simple JSON; I don't know what I am doing wrong.
Please help me with this Pig script.
I have to decode the data below, which is in JSON format.
3.json
{
"id": 6668,
"source_name": "National Stock Exchange of India",
"source_code": "NSE"
}
and my Pig script is:
a = LOAD '3.json' USING org.apache.pig.builtin.JsonLoader ('id:int, source_name:chararray, source_code:chararray');
dump a;
The error I get is given below:
2015-07-23 13:40:08,715 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1664361500_0001_m_000000_0
2015-07-23 13:40:08,775 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree : [ ]
2015-07-23 13:40:08,780 [LocalJobRunner Map Task Executor #0] INFO org.apache.hadoop.mapred.MapTask - Processing split: Number of splits :1
Total Length = 88
Input split[0]:
Length = 88
Locations:
-----------------------
2015-07-23 13:40:08,793 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader - Current split being processed file:/home/hariprasad.sudo/3.json:0+88
2015-07-23 13:40:08,844 [LocalJobRunner Map Task Executor #0] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map - Aliases being processed per job phase (AliasName[line,offset]): M: a[1,4] C: R:
2015-07-23 13:40:08,861 [Thread-5] INFO org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2015-07-23 13:40:08,867 [Thread-5] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local1664361500_0001
java.lang.Exception: org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for OBJECT (from [Source: java.io.ByteArrayInputStream@61a79110; line: 1, column: 0])
at [Source: java.io.ByteArrayInputStream@61a79110; line: 1, column: 3]
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for OBJECT (from [Source: java.io.ByteArrayInputStream@61a79110; line: 1, column: 0])
at [Source: java.io.ByteArrayInputStream@61a79110; line: 1, column: 3]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1291)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:385)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportInvalidEOF(JsonParserMinimalBase.java:318)
at org.codehaus.jackson.impl.JsonParserBase._handleEOF(JsonParserBase.java:354)
at org.codehaus.jackson.impl.Utf8StreamParser._skipWSOrEnd(Utf8StreamParser.java:1841)
at org.codehaus.jackson.impl.Utf8StreamParser.nextToken(Utf8StreamParser.java:275)
at org.apache.pig.builtin.JsonLoader.readField(JsonLoader.java:180)
at org.apache.pig.builtin.JsonLoader.getNext(JsonLoader.java:164)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-07-23 13:40:09,179 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-07-23 13:40:09,179 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_local1664361500_0001 has failed! Stop running all dependent jobs
2015-07-23 13:40:09,179 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-07-23 13:40:09,180 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-07-23 13:40:09,180 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2015-07-23 13:40:09,181 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.3.0-cdh5.1.3 0.12.0-cdh5.1.3 hariprasad.sudo 2015-07-23 13:40:07 2015-07-23 13:40:09 UNKNOWN
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_local1664361500_0001 a MAP_ONLY Message: Job failed! file:/tmp/temp-65649055/tmp1240506051,
Input(s):
Failed to read data from "file:///home/hariprasad.sudo/3.json"
Output(s):
Failed to produce result in "file:/tmp/temp-65649055/tmp1240506051"
Job DAG:
job_local1664361500_0001
2015-07-23 13:40:09,181 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-07-23 13:40:09,186 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias a
Details at logfile: /home/hariprasad.sudo/pig_1437673203961.log
grunt> 2015-07-23 13:40:14,754 [communication thread] INFO org.apache.hadoop.mapred.LocalJobRunner - map > map
Please help me understand what is wrong.
Thanks,
Hari
Have the compact (single-line) version of the JSON in 3.json. We can use http://www.jsoneditoronline.org for that.
3.json
{"id":6668,"source_name":"National Stock Exchange of India","source_code":"NSE"}
With this we are able to dump the data:
(6668,National Stock Exchange of India,NSE)
Ref: Error from Json Loader in Pig, where a similar issue is discussed.
Extract from the referenced link:
Pig doesn't usually like "human readable" json. Get rid of the spaces and/or indentations, and you're good.
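If you prefer to compact the file locally rather than use the online editor, and assuming jq is installed, the same can be done from the shell:
jq -c . 3.json > 3-compact.json
The -c flag prints compact output, one JSON object per line, which is the layout Pig's JsonLoader expects.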