Member since: 04-12-2019
Posts: 105
Kudos Received: 3
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 695 | 05-28-2019 07:41 AM |
| | 423 | 05-28-2019 06:49 AM |
| | 395 | 12-20-2018 10:54 AM |
| | 296 | 06-27-2018 09:05 AM |
| | 1398 | 06-27-2018 09:02 AM |
06-15-2020
05:34 AM
@sattar As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. A new thread will also give you the opportunity to include details specific to your environment, which could help others give a more accurate answer to your question.
10-30-2019
03:30 AM
Hi, To understand what the YARN application is doing, check the application logs of that particular YARN application. If the job has not completed, also check the ResourceManager logs to see whether it is stuck with any errors. Thanks, Arun
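As a minimal sketch (not part of the original reply) of checking an application's state and diagnostics programmatically via the YARN client API — assuming a Hadoop version where ApplicationId.fromString is available; the application ID is a placeholder:

```java
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AppStatusSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder application ID; take the real one from the RM UI or `yarn application -list`
        ApplicationId appId = ApplicationId.fromString("application_0000000000000_0001");

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // The report includes the state, final status, and any diagnostics the RM recorded
        ApplicationReport report = yarnClient.getApplicationReport(appId);
        System.out.println("State:       " + report.getYarnApplicationState());
        System.out.println("Final state: " + report.getFinalApplicationStatus());
        System.out.println("Diagnostics: " + report.getDiagnostics());

        yarnClient.stop();
    }
}
```

The container logs themselves can also be pulled with `yarn logs -applicationId <application ID>` once the application has finished.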
10-11-2019
03:03 AM
1 Kudo
On Spark 2 and HDP 3.x, edit the file "/usr/hdp/3.1.4.0-315/spark2/conf/hive-site.xml" and remove the property below:

<property>
  <name>metastore.catalog.default</name>
  <value>spark</value>
</property>

After that, all databases show up:

scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc);
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@4e6881e

scala> sqlContext.sql("show databases").show();
Hive Session ID = edac02b0-c2f7-4cd9-919d-97bff977be3b
+------------------+
|      databaseName|
+------------------+
|           default|
|information_schema|
|               sys|
|         toy_store|
|           website|
+------------------+
10-06-2019
06:56 AM
Hi, I'm facing the same issue. Have you had any luck?
09-25-2019
03:47 AM
Hi Vinay, Can you search for session ID "0x26c4d39814f4b8e" in the ZooKeeper server log to see if there is any clue as to why the session got closed? Thanks, Eric
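A tiny sketch of the kind of search meant here, simply filtering the ZooKeeper server log for that session ID — the log path is a placeholder; use the actual location on your ZK server:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ZkSessionLogSearch {
    public static void main(String[] args) throws IOException {
        String sessionId = "0x26c4d39814f4b8e";
        // Placeholder log path; point this at the real ZooKeeper server log
        Files.lines(Paths.get("/var/log/zookeeper/zookeeper.log"))
             .filter(line -> line.contains(sessionId))
             .forEach(System.out::println);
    }
}
```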
08-08-2019
03:36 PM
Hey Harsh, Thanks for responding. As multiple clients are requesting data from HBase, at some point users sometimes don't get data, and EOF exceptions or connection interruptions occur. We are not able to keep a record of the requested data or of the size of the input and output data being sent to the end user. Regards, Vinay K
07-04-2019
09:41 AM
Hi All, Our Oozie workflow jobs are getting suspended. After searching the logs, I found the errors below.
Error in Oozie logs:
2019-07-01 14:53:34,740 WARN ipc.Server - IPC Server handler 3 on 10020, call org.apache.hadoop.mapreduce.v2.api.HSClientProtocolPB.getJobReport from 172.20.100.146:58166 Call#137071 Retry#0
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not load history file /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist
Error in History server:
2019-07-03 06:05:34,723 INFO hs.HistoryFileManager - Moving /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist to /mr-history/done/2019/07/03/000019/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist
2019-07-03 06:05:34,780 ERROR hs.HistoryFileManager - Error while trying to move a job to done
java.io.IOException: rename from /mr-history/tmp/sshuser/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist to /mr-history/done/2019/07/03/000019/job_1561807385110_19761-1562133916809-sshuser-oozie%3Alauncher%3AT
%3Djava%3AW%3DE%2DAgg-1562133931559-1-0-SUCCEEDED-default-1562133922065.jhist failed.
at org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1342)
at org.apache.hadoop.fs.DelegateToFileSystem.renameInternal(DelegateToFileSystem.java:197)
at org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:749)
at org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:679)
at org.apache.hadoop.fs.FileContext.rename(FileContext.java:960)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.moveToDoneNow(HistoryFileManager.java:1022)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$1000(HistoryFileManager.java:82)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone(HistoryFileManager.java:440)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$1.run(HistoryFileManager.java:917)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Kindly help me to resolve the issue.
07-02-2019
04:51 PM
Hi All, Our Oozie workflow is getting suspended automatically, as shown below:
User : sshuser
Group : -
Created : 2019-07-01 14:50 GMT
Started : 2019-07-01 14:50 GMT
Last Modified : 2019-07-02 04:06 GMT
Ended : 2019-07-02 04:06 GMT
CoordAction ID: 0173513-190628063928782-oozie-oozi-C@174
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID Status Ext ID Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0178959-190629112435711-oozie-oozi-W@:start: OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0178959-190629112435711-oozie-oozi-W@virtual KILLED 0178960-190629112435711-oozie-oozi-W SUSPENDED
After digging into the logs, I found this error:
2019-07-01 14:53:35,067 WARN ActionCheckXCommand:523 - SERVER[hn1.cloudapp.net] USER[sshuser] GROUP[-] TOKEN[] APP[E_virtualWf] JOB[0178960-190629112435711-oozie-oozi-W] ACTION[0178960-190629112435711-oozie-oozi-W@hbaseDelete] Exception while executing check(). Error Code [JA009], Message[JA009: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not load history file wasbs://hbase@hdpsa.blob.core.windows.net/mr-history/tmp/sshuser/job_1561807385110_11674-1561992748186-sshuser-oozie%3Alauncher%3AT%3Djava%3AW%3DE_virtualWf-1561992812433-1-0-SUCCEEDED-default-1561992804743.jhist
I have checked the permissions: rwxrwxrwx on /mr-history and rwxrwxrwxt on /tmp. Sometimes the job works fine and sometimes I get the above issue. Could someone help me find the solution? I will be thankful to you.
06-27-2019
07:48 AM
Hi All, I'm seeing the following memory-usage messages in the worker logs while memory is being allocated to containers:
2019-06-27 07:19:42,879 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 2703 for container-id container_e35_1560508149155_60211_01_000339: -1B of 2 GB physical memory used; -1B of 4.2 GB virtual memory used
2019-06-27 07:19:42,948 INFO monitor.ContainersMonitorImpl - Memory usage of ProcessTree 2700 for container-id container_e35_1560508149155_60211_01_000417: -1B of 2 GB physical memory used; -1B of 4.2 GB virtual memory used
Why does the reported memory usage show -1B? Kindly help me resolve it. Regards, Vinay K
06-11-2019
08:46 AM
@Jay Kumar SenSharma Thanks for the quick response. We can create a custom alert, which is fine, but I'm looking for a way to set up monitoring for HBase queries and HBase exceptions. Regards, Vinay K
06-06-2019
11:06 AM
Hi @Vinay, @Geoffrey Shelton Okot, Are there any updates or solutions for this problem? Thank you.
05-28-2019
11:30 PM
The above was originally posted in the Community Help Track. On Tue May 28 23:19 UTC 2019, a member of the HCC moderation staff moved it to the Data Ingestion & Streaming track. The Community Help Track is intended for questions about using the HCC site itself.
05-20-2019
03:57 AM
The above question was originally posted in the Community Help track. On Mon May 20 03:56 UTC 2019, the HCC moderation staff moved it to the Hadoop Core track. The Community Help Track is intended for questions about using the HCC site itself.
04-15-2019
01:45 PM
Hi Folks, We are running HBase. We are using MultiRowRangeFilter in our Java code to get data for multiple row ranges. Below is a snippet of the code:
ranges.add(new RowRange(Bytes.toBytes(startRow), true, Bytes.toBytes(endRow), true));
Scan scan = new Scan();
MultiRowRangeFilter filter = new MultiRowRangeFilter(ranges);
FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filterList.addFilter(filter);
scan.setFilter(filterList);
scan.setCaching(500);
results = table.getScanner(scan);
For one specific table we are getting no data back, while the same code works for other tables. If I check manually via the hbase shell, the data is available. Even with a single-range scan the data is returned using the same code. How can I debug the data or the query at the HBase level? Could someone help me sort out the issue? Regards, Vinay K
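For reference, a self-contained sketch of the kind of multi-range scan described above, assuming the HBase 1.1.x client API — the table name, row keys, and ranges are hypothetical placeholders, not the poster's actual values:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiRangeScanSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("my_table"))) { // hypothetical table name

            // Build the row ranges; start/end keys here are placeholders
            List<RowRange> ranges = new ArrayList<>();
            ranges.add(new RowRange(Bytes.toBytes("row-0001"), true, Bytes.toBytes("row-0100"), true));
            ranges.add(new RowRange(Bytes.toBytes("row-0500"), true, Bytes.toBytes("row-0600"), true));

            FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
            filterList.addFilter(new MultiRowRangeFilter(ranges));

            Scan scan = new Scan();
            scan.setFilter(filterList);
            scan.setCaching(500);

            // Iterate the results and print the row keys that matched the ranges
            try (ResultScanner results = table.getScanner(scan)) {
                for (Result result : results) {
                    System.out.println(Bytes.toString(result.getRow()));
                }
            }
        }
    }
}
```

Printing the matched row keys like this on a table that "works" versus the one that returns nothing is one simple way to narrow down whether the ranges themselves are the problem.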
04-12-2019
08:10 AM
Hi All, We have set up a multi-node HDP cluster. We are using HBase 1.1.2 in this environment, with 20 RegionServers, each holding approximately 100 regions, and all are running fine. I want to track which RegionServer a query is executed on. We're running queries from Java code using MultiRowRangeFilter. Any help in sorting this out would be appreciated. Regards, Vinay K
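A minimal sketch of one way to look up which RegionServer hosts the region for a given row key, using the standard HBase 1.1.x client API — the table name and row key below are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLookupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             RegionLocator locator = connection.getRegionLocator(TableName.valueOf("my_table"))) {

            // Find the region (and the RegionServer) responsible for a given row key
            HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("row-0001"));
            System.out.println("Region: " + location.getRegionInfo().getRegionNameAsString());
            System.out.println("Server: " + location.getServerName());
        }
    }
}
```

Logging this for the start key of each range before issuing a MultiRowRangeFilter scan is one way to see which servers the scan will touch.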
04-09-2019
01:52 PM
The shutdown will work when you add the tag for this attribute in the code, so that the reader will read the tag and set the position based on that setting.
01-21-2019
09:16 AM
3 Kudos
Hi @Vinay 1) Kindly check whether you have specified the path correctly. 2) Try running the query on the host where HS2 is running. If you get a permission error or a "No files found" error, add the property "hive.users.in.admin.role=hive" in Custom hiveserver2-site via Ambari and try running the LOAD DATA query as the hive user (see the sketch below). I think this will work for you. Please accept this answer if you found it helpful.
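As one way to issue the statement, a minimal JDBC sketch run as the hive user against HiveServer2 — the host, port, HDFS path, and table name are hypothetical placeholders and not tied to the original poster's setup:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadDataSketch {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC URL; host, port, and database are placeholders
        String url = "jdbc:hive2://hs2-host.example.com:10000/default";
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {
            // HDFS path and table name are placeholders
            stmt.execute("LOAD DATA INPATH '/tmp/demo_emp' INTO TABLE demo_emp");
        }
    }
}
```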
01-11-2019
07:50 AM
Hi @Pulkit Bhardwaj I have gone through this link, and from it I mainly understand the performance of 3-way replication vs EC. However, I still don't understand how the data is stored in HDFS. If I have to store a 1 GB file in HDFS, the file is logically divided into 1024 MB / 128 MB = 8 blocks. So how does RS-6-3-1024k store these 8 blocks? What is the meaning of the 6 data blocks in RS, and how do the 3 parity blocks work? Does EC further divide the 8 blocks into sub-blocks? Could anyone help me understand the logic?
12-20-2018
10:54 AM
Hi, I have resolved the issue. I performed all of the steps on the node where I was facing the problem. First I cleared the cache by moving it from /tmp to a temporary directory. Then I moved all keytabs from /etc/security/keytabs/ to another temporary directory, and finally I restarted the ambari-agent on that node. Then I tried regenerating the keytabs, which succeeded. Now I have resumed the upgrade.
12-14-2018
01:00 PM
Hi All, I have successfully upgraded HDP 2.6.4 to 3.0.1 and am now testing and accessing the cluster. While accessing the ResourceManager UI and History Server UI, I'm getting "401 Authorization Required". I have not manually configured SPNEGO in the cluster, so how can I access the RM UI?
12-12-2018
09:41 AM
Thanks @Geoffrey Shelton Okot. I have also found the problem. Whenever we install or upgrade HDP, we specify the repository paths for HDP and HDP-UTILS in the UI, and Ambari accordingly creates repos on all agents named HDP-2.6-repo-51 and HDP-UTILS-1.1.0.22-repo-51. But I had also created HDP and HDP-UTILS repositories manually on all nodes, and all HDP packages had been installed from the manual repository path. When I was starting services, the HBase client and other clients were looking for the HDP-2.6-repo-51 repository to install the client, which I did not have. Now I have disabled the manual repository and reinstalled the client packages manually, and it's working fine.
12-12-2018
10:45 AM
@Vinay Please log in and accept the answer if you find it helpful. Thanks.
10-11-2018
12:23 PM
Hi Team, We are planning to upgrade our test cluster to the new Hadoop version 3. Before the upgrade, I'm reviewing the features of the new version. I found that the Application Timeline Server has also been upgraded to v2, and Apache HBase is the supported back-end storage for Timeline Server v2. Does that mean we need more memory on the specific node where the new Timeline Server v2 will be installed, since Apache HBase is largely a memory-driven service? Kindly assist me in understanding the scenario.
- Tags:
- hadoop
- Hadoop Core
07-20-2018
04:51 PM
1 Kudo
What component are you asking about? What are you trying to achieve? The components typically call each other over combinations of separate protocols.
- HDFS and YARN interact via RPC/IPC.
- Ambari Server and Agents communicate over HTTP & REST. Ambari also needs JDBC connections to its backing database.
- Hive, HBase, and Spark can use a Thrift server. The Hive metastore uses JDBC.
- Kafka has its own TCP protocol.
I would suggest starting with a specific component for the use case(s) you have in mind. Hadoop itself is comprised only of HDFS & YARN + MapReduce.
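For example, a minimal sketch of a client-side call that goes over Hadoop RPC/IPC to the NameNode under the hood — it assumes the cluster's fs.defaultFS is available from the configuration on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRpcSketch {
    public static void main(String[] args) throws Exception {
        // The FileSystem client talks to the NameNode over Hadoop RPC/IPC
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath() + " owner=" + status.getOwner());
            }
        }
    }
}
```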
07-19-2018
09:28 AM
@Vinay Please refer to this related question and see if it helps: https://community.hortonworks.com/questions/45945/how-to-run-oozie-shell-action-for-hive-queries-in.html
07-10-2018
11:40 AM
Hi All, We have a 5-node cluster: 1 NameNode, 1 Secondary NameNode, and 3 DataNodes, with Kerberos and Ranger installed and working. I'm able to open the Hive shell after obtaining a Kerberos ticket with kinit, and Hive commands succeed when I run them manually. For automation, I'm running a Sqoop job via an Oozie workflow; the Sqoop job imports data from an RDBMS into Hive. When I run the script, I get the error below: /user/abc/dev/tables_list/demo_emp -
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:569)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1566)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3528)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3560)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:550)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
... 14 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3528)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3560)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:550)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
... 19 more
Could someone offer a suggestion?
06-27-2018
09:05 AM
Well, I found two solutions; I hope they are correct. 1. Either we have to create the user on the OS where the ResourceManager is installed, or 2. We have to configure an LDAP client, for example SSSD, which will integrate with the AD server. Let me know if anyone has a question.
06-27-2018
09:02 AM
Hi, I have configured SSSD with the AD server. Now I'm able to run the query. Thanks. Let me know if anyone has a question.