Member since
02-02-2016
583
Posts
518
Kudos Received
98
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1348 | 09-16-2016 11:56 AM |
| | 706 | 09-13-2016 08:47 PM |
| | 2692 | 09-06-2016 11:00 AM |
| | 1448 | 08-05-2016 11:51 AM |
| | 2807 | 08-03-2016 02:58 PM |
06-13-2022
01:54 AM
@PriyalPotnis as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
02-11-2021
06:22 AM
I think this was due to the Hive metastore service not running. You should run the command "hive --service metastore &" first and then start the Hive console.
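For reference, a minimal sketch of that sequence (the log path is just an example):

# Start the Hive metastore service in the background (log path is illustrative)
nohup hive --service metastore > /tmp/hive-metastore.log 2>&1 &
# Once the metastore is up, start the Hive CLI
hive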
09-30-2020
02:54 AM
In my case, the source file gets removed when I load a single file with the OVERWRITE clause. The files stay when I load without OVERWRITE for a set of files matching a pattern (say _*.txt).
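For comparison, the two statements in question look roughly like this (table and path names are made up):

# Load a single file, replacing the table's existing data (names are hypothetical)
hive -e "LOAD DATA INPATH '/user/me/input/data_1.txt' OVERWRITE INTO TABLE mytable;"
# Load a set of files matching a pattern, appending to the table
hive -e "LOAD DATA INPATH '/user/me/input/_*.txt' INTO TABLE mytable;"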
04-20-2020
09:47 AM
This one-liner counts the total number of rows across the part files under a directory (note the quoted grep pattern, so the shell doesn't expand it):

hdfs dfs -ls -R <directory> | grep 'part-r' | awk '{print $8}' | xargs hdfs dfs -cat | wc -l
02-19-2020
09:20 AM
Writing this so that it can help someone in the future: I was installing Hive and getting an error that the Hive metastore wasn't able to connect, and I resolved it by recreating the Hive metastore database. Somehow the user that was created in the MySQL Hive metastore wasn't working properly and couldn't authenticate. So I dropped the metastore DB, dropped the user, recreated the metastore DB, recreated the user, and granted all privileges, and then it worked without issues.
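Roughly the sequence I mean, assuming the common metastore DB/user names (substitute your own names and a real password; DROP USER IF EXISTS needs MySQL 5.7+):

# Database name, user name, and password below are assumptions; adjust for your setup
mysql -u root -p <<'SQL'
DROP DATABASE IF EXISTS metastore;
DROP USER IF EXISTS 'hive'@'localhost';
CREATE DATABASE metastore;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hivepassword';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
SQL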
01-08-2020
04:27 AM
You can also set HIVE_SKIP_SPARK_ASSEMBLY before running the command, which should remove the warnings:

export HIVE_SKIP_SPARK_ASSEMBLY=true; hive -S --database dbname -e 'show tables;'
08-18-2019
09:42 PM
Hi @rushi_ns, yours might be a completely different issue. Please create a new question thread stating your issue.
06-02-2017
11:37 AM
1 Kudo
@Jitendra Yadav Can you please check which user has permission to read/write this file?

# ls -la /etc/yum.repos.d/ambari.repo
-rw-r--r--. 1 root root 304 May 31 17:08 /etc/yum.repos.d/ambari.repo

You can verify the same using this Python script:

import os
import pwd
# Print the owner of the repo file by looking up the uid from os.stat
print('Who Owns it: ' + pwd.getpwuid(os.stat("/etc/yum.repos.d/ambari.repo").st_uid).pw_name)

Now if you do the following:

# useradd 55025
# chown 55025 /etc/yum.repos.d/ambari.repo

then you will see 55025 in the output.
05-30-2017
08:19 PM
@Constantin Stanca There were no errors in Hive/HDFS/YARN etc. during that time frame; I even tried restarting all the services, but it didn't help. When I restarted the Ambari server, the issue got resolved. I don't know how this exception is related to an Ambari restart 🙂
09-21-2016
11:54 AM
Thanks @Ayub Pathan for the help; it worked with the below syntax, as per your suggestion. RULE:[1:$1](4[0-9]*)s/^4/d/
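If anyone wants to sanity-check a rule like this, Hadoop ships a small utility for testing auth_to_local mappings (the principal below is made up, not from this thread):

# Test how auth_to_local rules translate a principal into a local user name
# (4501@EXAMPLE.COM is an illustrative principal)
hadoop org.apache.hadoop.security.HadoopKerberosName 4501@EXAMPLE.COM
# With the rule above, the leading "4" is rewritten to "d", so the expected local name is d501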
09-19-2016
12:39 PM
Correct, if the change is non-destructive it will be considered for a maintenance release. But I should point out that the community tries to minimize this. Cheers.
09-13-2016
08:47 PM
1 Kudo
I found out the issue: we enabled SPNEGO for Atlas, so it was checking the spnego.service.keytab file. Somehow the "atlas" user got removed from the application group and therefore didn't have read permission on spnego.service.keytab. Once we gave it read permission, the issue was resolved. Thanks
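For anyone hitting the same thing, a quick sketch of the checks we effectively did (the keytab path and group name are typical HDP defaults, so verify them in your environment):

# Check owner, group, and mode of the SPNEGO keytab (typical HDP path)
ls -l /etc/security/keytabs/spnego.service.keytab
# Confirm the atlas user is still in the group that owns the keytab
id atlas
# If it fell out of the group, add it back (group name "hadoop" is an assumption)
usermod -a -G hadoop atlas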
06-05-2018
05:46 PM
It worked for me. Thanks!
06-05-2017
05:23 PM
My Ambari version is 2.2.2.0 and the HDP version is HDP-2.3.4.0-3485.

bash-4.2$ yum list 'smartsense*'
Loaded plugins: product-id, search-disabled-repos, subscription-manager
HDP-2.3 175/175
HDP-UTILS-1.1.0.20 44/44
puppetlabs-deps 12/12
puppetlabs-products 66/66
rhel-7-server-eus-rpms 8407/8407
rhel-7-server-optional-rpms 7122/7122
rhel-7-server-rpms 8450/8450
rhel-7-server-thirdparty-oracle-java-rpms 170/170
rhel-rs-for-rhel-7-server-eus-rpms 162/162
usps-addons 6/6
Error: No matching Packages to list
bash-4.2$ cat /etc/yum.repos.d/ambari.repo
cat: /etc/yum.repos.d/ambari.repo: No such file or directory
06-23-2016
03:55 PM
Thanks @Aman Mundra, glad it worked 🙂. Please accept the answer to close this thread.
06-15-2016
09:39 AM
I didn't understand the difference between s3 and s3n. This link helped: http://stackoverflow.com/questions/10569455/difference-between-amazon-s3-and-s3n-in-hadoop Thanks again.
06-11-2016
08:46 PM
1 Kudo
Hello @Thiago. It is possible to achieve communication across secured and unsecured clusters. A common use case for this is using DistCp for transfer of data between clusters.

As mentioned in other answers, the configuration property ipc.client.fallback-to-simple-auth-allowed=true tells a secured client that it may enter a fallback unsecured mode when the unsecured server side fails to satisfy authentication. However, I recommend not setting this in core-site.xml, and instead setting it on the command line invocation specifically for the DistCp command that needs to communicate with the unsecured cluster. Setting it in core-site.xml means that all RPC connections for any application are eligible for fallback to simple authentication. This potentially expands the attack surface for man-in-the-middle attacks.

Here is an example of overriding the setting on the command line while running DistCp:

hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true hdfs://nn1:8020/foo/bar hdfs://nn2:8020/bar/foo

The command must be run while logged into the secured cluster, not the unsecured cluster. This is adapted from one of my prior answers: https://community.hortonworks.com/questions/294/running-distcp-between-two-cluster-one-kerberized.html
10-17-2017
02:36 PM
I tried using the external table method but I ran out of memory. My Mongo collection (table2) has 10 million records (0.755 GB) and reading from it works. After the insert task fails, I do a count on the native table (table1) and it contains 0 rows. My query looks like this: "INSERT INTO table1 SELECT * FROM table2"; if I add "LIMIT 1000" it works, but I need to migrate the entire collection. I attached the output from beeline.
06-10-2016
07:25 AM
I have 3 JournalNodes in my cluster, but they don't seem to fail.
06-09-2016
12:43 PM
Hello, it works fine now. The first solution worked for connecting to the database, but when Sqoop ran MapReduce the same error occurred, so all the MapReduce tasks need this configuration as well. The solution is to configure the truststore parameter in the MapReduce configuration inside Ambari with the value: -Djavax.net.ssl.trustStore=/home/user/truststore.jks Thank you
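If changing the cluster-wide MapReduce configuration is too broad, the same JVM option can also be passed per job; for example, a single Sqoop run could carry it like this (the JDBC URL, table name, and truststore path are placeholders, not from my setup):

# Hand the truststore to the map tasks of this one job only
sqoop import \
  -Dmapreduce.map.java.opts="-Djavax.net.ssl.trustStore=/home/user/truststore.jks" \
  --connect "jdbc:oracle:thin:@//dbhost:1521/ORCL" \
  --table MYTABLE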
06-07-2016
12:54 PM
@Jitendra Yadav - yes, this works, thanks! This is what the process looks like when I run ps:

root 17484 1 99 13:47 pts/0 00:00:59 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/postgresql-jdbc.jar org.apache.ambari.server.controller.AmbariServer

It seems setting -Djava.library.path is the only thing required; I have subsequently removed the snappy link in /usr/lib/ambari-server/ and can confirm it still works.
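In case someone wants to persist that flag across restarts, one likely place is the Ambari server environment file (the file and variable names below are from typical Ambari installs, so verify them on your version):

# Append the native library path to Ambari's JVM arguments
# (/var/lib/ambari-server/ambari-env.sh and AMBARI_JVM_ARGS are assumptions based on common installs)
echo 'export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native"' >> /var/lib/ambari-server/ambari-env.sh
ambari-server restart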
12-06-2018
03:05 AM
I saw this error today. Apparently my HDFS file was in CSV format but my table was defined as ORC.
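In case it helps the next person, the usual fix I know of is to stage the CSV in a TEXTFILE table and convert it, roughly like this (all table, column, and path names are invented):

# Stage the CSV as a text table, then convert rows into the ORC table
# (names and the path are hypothetical)
hive -e "
CREATE TABLE staging_csv (id INT, name STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE;
LOAD DATA INPATH '/user/me/data.csv' INTO TABLE staging_csv;
INSERT INTO TABLE orc_table SELECT * FROM staging_csv;
"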
06-06-2016
07:25 PM
You can try the following, where c is the character vector holding your comma-separated lines (note that c shadows R's built-in c() function, so a different variable name would be safer):

li <- read.table(textConnection(c), sep = ",")
07-01-2016
02:54 AM
I know this is a silly question, but where can you find that location? I am not able to trace it back. @srai
06-02-2016
08:55 PM
3 Kudos
@Paul Schuliger - Are you sure you are trying with port 2222? Please have a look at - https://github.com/hortonworks/tutorials/blob/hdp/tutorials/hortonworks/learning-the-ropes-of-the-hortonworks-sandbox/tutorial.md
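For reference, connecting to the sandbox shell from that tutorial usually looks like this (adjust the host/port if your sandbox maps them differently):

# SSH into the Hortonworks sandbox on the forwarded port
ssh root@127.0.0.1 -p 2222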
06-15-2016
07:46 AM
The Optimized Row Columnar (ORC) file format provides a highly efficient way to store Hive data. An ORC file stores groups of rows called stripes, along with auxiliary information in a file footer. It is just a storage format and has nothing to do with Spark itself.
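To make that concrete, a minimal sketch of an ORC-backed table (table and column names are invented):

# Define a table stored as ORC and populate it from an existing text table
# (all names are hypothetical)
hive -e "
CREATE TABLE events_orc (id INT, payload STRING) STORED AS ORC;
INSERT INTO TABLE events_orc SELECT id, payload FROM events_text;
"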
06-01-2016
09:15 AM
@Roberto Sancho Great! 🙂 Please accept the answer that helped you, so this thread can be closed.
05-26-2016
03:45 PM
Hi @Mamta Chawla, please feel free to accept the answer that helped you, so that this thread can be closed. Thanks
05-27-2017
09:28 AM
If my Hive table is an external table located on HDFS, could this solution work? Thanks.