Member since: 10-28-2015
Posts: 61
Kudos Received: 10
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1625 | 09-25-2017 11:22 PM |
| | 6034 | 09-22-2017 08:04 PM |
| | 5366 | 02-03-2017 09:28 PM |
| | 3792 | 05-10-2016 05:04 AM |
| | 1080 | 05-04-2016 08:22 PM |
05-30-2021
11:15 PM
After decommissioning, recommission the NodeManager and then start it again.
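For reference, a minimal sketch of the recommission steps on the ResourceManager host. The exclude-file path and the daemon command are assumptions and vary per cluster (the path is whatever yarn.resourcemanager.nodes.exclude-path points to; older releases use yarn-daemon.sh instead of yarn --daemon):

```bash
# Remove the NodeManager host from the file referenced by
# yarn.resourcemanager.nodes.exclude-path (cluster-specific,
# e.g. /etc/hadoop/conf/yarn.exclude), then:

# Tell the ResourceManager to re-read the include/exclude lists.
yarn rmadmin -refreshNodes

# Start the NodeManager again on the recommissioned host (Hadoop 3.x syntax;
# on Hadoop 2.x use: yarn-daemon.sh start nodemanager).
yarn --daemon start nodemanager
```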
02-11-2021
06:22 AM
I think this was due to the Hive metastore service not running. You should run the command "hive --service metastore &" first and then start the Hive console.
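As a rough sketch of the sequence, assuming the hive binary is on the PATH and the metastore listens on its default port 9083:

```bash
# Start the Hive metastore service in the background.
hive --service metastore &

# Optionally confirm it is listening (9083 is the default metastore port).
netstat -tln | grep 9083

# Then launch the Hive CLI.
hive
```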
09-29-2020
05:27 AM
We are facing issues with region availability, and it seems to be due to compactions. We get the exception below when we try to access the region:
org.apache.hadoop.hbase.NotServingRegionException: Region is not online
But when we checked the corresponding region server logs, we can see a lot of compactions happening on the table. Does the table become inaccessible during compaction? Is there a way to reduce the number of compactions through some setting?
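For reference, this is roughly how the state can be checked from the HBase shell; the table name 'mytable' is just a placeholder:

```bash
# Check the compaction state of the table ('mytable' is a placeholder).
echo "compaction_state 'mytable'" | hbase shell

# Check whether all regions are assigned and online.
echo "status 'detailed'" | hbase shell
```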
01-08-2020
01:46 AM
You have to do this on the node where the ResourceManager runs. On the other nodes this directory would just be empty, so this should be fine.
07-07-2018
08:54 PM
In certain Apache Hadoop use cases we want to get the checksum of files stored in HDFS. This is specifically useful when we are moving data to or from HDFS and want to verify that a file was transferred correctly. Earlier there was no easy way to compare checksums, but starting with Apache Hadoop 3.1 we can compare the checksum of a file stored in HDFS with that of a file stored locally (HDFS-13056).

The default checksum algorithm for HDFS chunks is CRC32C. A client can override it by setting dfs.checksum.type (either CRC32 or CRC32C). This is not a cryptographically strong checksum, but it can be used for a quick comparison. When we run the checksum command (hdfs dfs -checksum) for an HDFS file, it calculates the MD5 of the MD5s of the checksums of the individual chunks (each chunk is typically 512 bytes long). However, this is not very useful for comparison with a local copy.

Example

For example, the command below computes the checksum of the file hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar stored in HDFS:

hdfs dfs -checksum /tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar
/tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar MD5-of-0MD5-of-512CRC32C 000002000000000000000000c16859d1d071c6b1ffc9c8557d4909f1

However, this checksum is not easily comparable to that of a local copy. Instead, we can calculate the CRC32C checksum of the whole file by adding -Ddfs.checksum.combine.mode=COMPOSITE_CRC to the same command:

bin/hdfs dfs -Ddfs.checksum.combine.mode=COMPOSITE_CRC -checksum /tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar
/tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar COMPOSITE-CRC32C 3799db55

The property dfs.checksum.combine.mode=COMPOSITE_CRC tells HDFS to calculate a combined CRC of the individual chunk CRCs instead of the MD5-of-MD5-of-CRCs.

It is important to note that we can calculate a checksum of type CRC32C or CRC32 for an HDFS file depending on how it was originally written. For example, we can't calculate CRC32 for the file in the example above because its chunks were originally written with CRC32C checksums. If we want the CRC32 of that file, we need to specify dfs.checksum.type as CRC32 while writing it:

hdfs dfs -Ddfs.checksum.type=CRC32 -put hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar /tmp
hdfs dfs -checksum /tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar
/tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar MD5-of-0MD5-of-512CRC32 0000020000000000000000009f26e871c80d4cbd78b8d42897e5b364
hdfs dfs -Ddfs.checksum.combine.mode=COMPOSITE_CRC -checksum /tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar
/tmp/hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar COMPOSITE-CRC32 c1ddb422

This checksum can be easily compared to the checksum of the same file on the local file system with the crc32 command:

crc32 hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar
c1ddb422
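Putting the pieces together, here is a minimal sketch of an end-to-end verification script. It assumes the crc32 utility is installed locally and uses the same file name as above purely as a placeholder; adjust the file name and HDFS directory to your environment:

```bash
#!/usr/bin/env bash
# Sketch: verify a local file against its HDFS copy using composite CRC32.
set -euo pipefail

FILE=hadoop-common-2.7.3.2.6.3.0-SNAPSHOT.jar   # placeholder file name
HDFS_DIR=/tmp                                   # placeholder HDFS directory

# Write the file with CRC32 chunk checksums so it is comparable to the local crc32 tool.
hdfs dfs -Ddfs.checksum.type=CRC32 -put -f "$FILE" "$HDFS_DIR"

# Composite CRC32 of the HDFS copy (the checksum is the last field of the output).
hdfs_crc=$(hdfs dfs -Ddfs.checksum.combine.mode=COMPOSITE_CRC \
           -checksum "$HDFS_DIR/$FILE" | awk '{print $NF}')

# CRC32 of the local copy.
local_crc=$(crc32 "$FILE")

if [ "$hdfs_crc" = "$local_crc" ]; then
  echo "Checksums match: $hdfs_crc"
else
  echo "Checksum mismatch: hdfs=$hdfs_crc local=$local_crc" >&2
  exit 1
fi
```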
08-31-2018
06:44 PM
Thanks for the update. Glad you were able to make it work. Thanks for the comments and sharing it with the community.
10-17-2017
04:47 PM
@John Carter, it will depend on the kind of latency, processing, and data volume you will be handling. They are different approaches: Sqoop, as you know, runs MapReduce jobs, while NiFi is geared toward the streaming side. Given the right resources, both will work.
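For illustration, a hedged sketch of what the batch-style Sqoop side looks like; the connection string, credentials, table, and target directory below are all placeholders:

```bash
# Batch import of a relational table into HDFS; Sqoop launches MapReduce jobs
# under the hood. All values below are placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --num-mappers 4
```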
02-10-2019
08:57 PM
I had the same problem happen more than once, on different clusters from different sites and network configurations (with or without proxy/gateway/Knox, company network, wifi router, home ISP, public wifi...), with various results. It affected the display of the notes themselves and sometimes also the welcome page (sometimes only the notes, while the welcome page was fine). Some browser settings block websockets for security reasons, so be aware of that. Sometimes it can be a firewall issue. The last occurrence turned out to require clearing the outputs of the note. I was working on a note that included a Spark job that took ages, and the progress bar froze. I restarted Zeppelin, Spark, and YARN; nothing seemed to fix it. The note seemed "lost", undisplayable, desperately blank. My fix (after a couple of hours trying to figure out what was causing it) was to hover over the note on the welcome page and click the eraser icon, "clear output"... and voila, I was able to open my note again. Note that this has to be done after a restart of the Zeppelin service. Just for note (no pun intended): when I inspected the browser prior to the fix, the websocket kept trying to load content from the server but nothing came back.
05-04-2016
08:22 PM
@Bindu Nayidi You can edit the corresponding log4j file in Ambari: Ambari -> <Service> -> Configs -> Advanced log4j.
07-20-2016
10:17 PM
Did you have to keep the settings below as well?
conf.set("hadoop.security.authentication","kerberos"); conf.set("java.security.krb5.conf","/etc/krb5.conf"); UserGroupInformation.setConfiguration(conf); UserGroupInformation.loginUserFromKeytab("principal","keytab");