Member since: 12-30-2015
Posts: 164
Kudos Received: 29
Solutions: 10
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 28777 | 01-07-2019 06:17 AM |
|  | 1475 | 12-27-2018 07:28 AM |
|  | 4443 | 11-26-2018 10:12 AM |
|  | 1927 | 11-16-2018 12:15 PM |
|  | 4162 | 10-22-2018 09:31 AM |
09-11-2018
12:57 PM
Run alter table schema_7539.activityparameters_4ITEM_1 COMPACT 'MAJOR'; in Hive, then re-run the query. If that doesn't work, run the query in a Spark Hive context.
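A minimal shell sketch of those same steps, assuming you connect through Beeline (the JDBC URL and host below are placeholders):

```bash
# Trigger a major compaction on the table from the post (runs asynchronously).
beeline -u "jdbc:hive2://hiveserver-host:10000/schema_7539" \
  -e "ALTER TABLE activityparameters_4ITEM_1 COMPACT 'MAJOR';"

# Watch compaction progress; re-run the original query once the compaction has finished.
beeline -u "jdbc:hive2://hiveserver-host:10000/schema_7539" \
  -e "SHOW COMPACTIONS;"
```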
08-28-2018
10:58 AM
2 Kudos
Hi @subhash parise What did you set the ownership of the version-2 folder and its contents to? It should be zookeeper:hadoop, and zookeeper (the owner) should have write permissions. Did you also check that you can traverse the folder structure as the zookeeper user? For example, does this work?
[root@host]# su - zookeeper
[zookeeper@host ~]$ cd /data/hadoop/zookeeper/version-2/
[zookeeper@host version-2]$ ls -al
drwxr-xr-x. 2 zookeeper hadoop 4096 Aug 27 08:03 .
drwxr-xr-x. 3 zookeeper hadoop 4096 Aug 27 08:03 ..
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 acceptedEpoch
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 currentEpoch
-rw-r--r--. 1 zookeeper hadoop 67108880 Aug 28 10:52 log.100000001
-rw-r--r--. 1 zookeeper hadoop 296 Aug 27 08:03 snapshot.0
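If the ownership or permissions turn out to be wrong, a minimal sketch of the fix, assuming the data directory shown above (adjust the path to your layout):

```bash
# Give the zookeeper user ownership of the whole data directory tree and make
# sure the owner can write and traverse it, then restart the ZooKeeper server.
chown -R zookeeper:hadoop /data/hadoop/zookeeper
chmod -R u+rwX /data/hadoop/zookeeper
```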
08-30-2018
11:19 AM
@yong lau I saw your other post, but the error was hard to find in the bulk paste. Try posting your error in a code box so that we can see it better, and trim it down to the actual error rather than including needless text. That said, Hive LLAP requires the configurations we outline above in this post. Check those out, along with the links above. Make sure you have the settings for low specs and for a single LLAP container, then try to start LLAP. Sometimes it takes me 2-3 attempts before it starts without errors. Once you have it working with low specs, slowly increase them. It is also important to know that the actual errors you need to find are likely inside the YARN containers, so you will have to dig them out to truly know what stops LLAP from starting.
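One way to dig those errors out of the YARN containers, as a hedged sketch (the application ID below is a placeholder; look yours up first):

```bash
# LLAP runs as a YARN application; find its application ID first.
yarn application -list -appStates RUNNING,FAILED | grep -i llap

# Dump all container logs for that application and search for the real failure.
yarn logs -applicationId application_1234567890123_0001 | grep -iE "error|exception"
```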
10-16-2017
11:48 AM
@subhash parise It is bizarre that there are no files in /var/lib/ambari-agent/data. On the offending node, do the following:

Stop and remove ambari-agent:
ambari-agent stop
yum erase ambari-agent
rm -rf /var/lib/ambari-agent
rm -rf /var/run/ambari-agent
rm -rf /usr/lib/ambari-agent
rm -rf /etc/ambari-agent
rm -rf /var/log/ambari-agent
rm -rf /usr/lib/python2.6/site-packages/ambari*

Re-install the Ambari agent:
yum install ambari-agent

Point the agent at the Ambari server by editing the [server] section:
vi /etc/ambari-agent/conf/ambari-agent.ini

[server]
hostname={Ambari-server_host_FQDN}
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30

Restart the agent:
ambari-agent start

That should resolve the issue.
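To confirm the agent came back up and re-registered, a quick check (assuming the default log location):

```bash
# Check that the agent process is running and watch its log for a successful
# registration with the Ambari server.
ambari-agent status
tail -f /var/log/ambari-agent/ambari-agent.log
```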
08-18-2016
07:26 AM
Hi @vshukla, I hadn't noticed the Spark tagging. This question is about enabling multiple Hive shell sessions on multiple servers at the same time.
08-17-2016
09:06 AM
After re-installing SolrCloud with the data on the bigger volume from the outset, everything is working again. I'm not sure what the original problem was after moving the indexes; I would have expected the symlink to work.
08-11-2016
02:06 PM
@subhash parise I just posted an article with a very simple Pig + Hive example of HDFS compression. https://community.hortonworks.com/content/kbentry/50921/using-pig-to-convert-uncompressed-data-to-compress.html
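A minimal sketch of the general idea (not the linked article's exact example); the paths are placeholders and gzip is just one codec choice:

```bash
# Write a Pig script that stores its output compressed on HDFS, so downstream
# Hive (e.g. an external table over /data/compressed/events) reads less data.
cat > compress.pig <<'EOF'
SET output.compression.enabled true;
SET output.compression.codec org.apache.hadoop.io.compress.GzipCodec;
raw = LOAD '/data/raw/events' USING PigStorage('\t');
STORE raw INTO '/data/compressed/events' USING PigStorage('\t');
EOF

pig -x mapreduce compress.pig
```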
11-16-2016
08:37 AM
@Jayanta Das Thanks for the update.
11-25-2017
01:59 AM
What about PostgreSQL?