Member since: 09-15-2015
Posts: 294
Kudos Received: 764
Solutions: 81
My Accepted Solutions
Title | Views | Posted
---|---|---
| | 1597 | 07-27-2017 04:12 PM |
| | 4344 | 06-29-2017 10:50 PM |
| | 2027 | 06-21-2017 06:29 PM |
| | 2278 | 06-20-2017 06:22 PM |
| | 2068 | 06-16-2017 06:46 PM |
03-07-2017
06:52 AM
1 Kudo
@Jay SenSharma I tried editing the article again. The issue reproduces. Don't know what the issue is.
03-07-2017
06:34 AM
2 Kudos
Trying to edit an article I posted today, and it's throwing an Internal Error. Is there an issue going on? I tried three times before posting this question.
03-07-2017
02:22 AM
10 Kudos
Prerequisite:
Create an account in S3 and get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

AWS Command Line:
For the AWS command line to work, have AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY configured in ~/.aws/credentials. Something like:

[default]
aws_access_key_id=$AWS_ACCESS_KEY_ID
aws_secret_access_key=$AWS_SECRET_ACCESS_KEY

You might also want to set the region and output in ~/.aws/config. Something like:

[default]
region=us-west-2
output=json
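
As a quick sanity check, something like the following should confirm that the CLI can see those credentials (aws configure is simply an interactive way to populate the same two files):

    aws configure    # optional: fills in ~/.aws/credentials and ~/.aws/config interactively
    aws s3 ls        # lists the buckets visible to this account; should return without errors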
Steps:
1. Create a bucket in S3. You can create it online on the Amazon Console (CreatingABucket.html) or using the command line, like: aws s3 mb s3://$BUCKET_NAME
2. Modify the below properties in core-site.xml (see the sketch after these steps):
   - fs.defaultFS to s3a://$BUCKET_NAME
   - fs.s3a.access.key to $AWS_ACCESS_KEY_ID
   - fs.s3a.secret.key to $AWS_SECRET_ACCESS_KEY
   - fs.AbstractFileSystem.s3a.impl to org.apache.hadoop.fs.s3a.S3A (HADOOP-11262)
3. You might also want to set the below properties in tez-site.xml if you need to run some example jobs:
   - tez.staging-dir to hdfs://$NN_HOST:8020/tmp/$user_name/staging (TEZ-3276)
   - hive.exec.scratchdir to hdfs://$NN_HOST:8020/tmp/hive (for running Hive on Tez)
4. Restart HDFS, YARN, and MapReduce2.

You should now be able to use S3 as the default filesystem.
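
For reference, here is a minimal sketch of how the core-site.xml entries above could look if you edit the file directly; the $-placeholders are the same ones used in the steps:

    <property>
      <name>fs.defaultFS</name>
      <value>s3a://$BUCKET_NAME</value>
    </property>
    <property>
      <name>fs.s3a.access.key</name>
      <value>$AWS_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3a.secret.key</name>
      <value>$AWS_SECRET_ACCESS_KEY</value>
    </property>
    <property>
      <name>fs.AbstractFileSystem.s3a.impl</name>
      <value>org.apache.hadoop.fs.s3a.S3A</value>
    </property>

After the restart, a simple hdfs dfs -ls / should list the root of the S3 bucket rather than HDFS, which is an easy way to confirm the default filesystem has switched.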
03-07-2017
12:06 AM
12 Kudos
@NLAMA If you shut down the Active NameNode, the Standby NameNode needs to become the Active NameNode before it can serve any of your requests (this is the failover mechanism). Once the ZKFC detects that the Standby NameNode needs to become Active, it tells the Standby NameNode to start the services related to Active mode, which includes updating the fsimage, rolling over the edit logs, etc. This operation can take some time, hence the delay you mentioned. Killing the Standby NameNode does not affect the Active NameNode, so you don't see any delay in performing HDFS operations. Hope this explains it. Let me know if you have more doubts. Thanks!
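
If you want to watch the transition yourself, the hdfs haadmin commands show which NameNode is Active at any moment; nn1 and nn2 below are example service IDs (they come from dfs.ha.namenodes.<nameservice> in hdfs-site.xml and may differ in your cluster):

    hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
    hdfs haadmin -getServiceState nn2
    hdfs haadmin -failover nn1 nn2       # graceful failover, instead of killing the Active NameNode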
03-04-2017
12:42 AM
6 Kudos
These might help:
https://community.hortonworks.com/questions/39017/can-someone-point-me-to-a-good-tutorial-on-spark-s.html
https://www.rittmanmead.com/blog/2017/01/getting-started-with-spark-streaming-with-python-and-kafka/
03-02-2017
06:08 PM
2 Kudos
@khadeer mhmd Did the above posts resolve your issue?
03-02-2017
04:32 AM
3 Kudos
These are a few posts which might help you:
https://community.hortonworks.com/questions/75421/-bash-hdfs-command-not-found-hortonworks-hdp-25.html
https://community.hortonworks.com/questions/76426/-bash-hadoop-command-not-found.html
https://community.hortonworks.com/questions/58247/hdp-25-sandboxvm-commandsscripts-are-not-found.html
03-02-2017
12:54 AM
6 Kudos
@khadeer mhmd These might help:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-movement-and-integration/content/ch_mirroring_on-prem.html
https://hortonworks.com/hadoop-tutorial/mirroring-datasets-between-hadoop-clusters-with-apache-falcon/
03-01-2017
05:06 AM
1 Kudo
hive> USE DB1;
Now commands such as SHOW TABLES will list the tables in the database DB1.
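
A minimal session, assuming the database DB1 exists and holds a hypothetical table called orders:

    hive> USE DB1;
    hive> SHOW TABLES;
    hive> SELECT * FROM orders LIMIT 10;

SHOW TABLES is now scoped to DB1, and unqualified table names such as orders resolve inside DB1 until you switch databases again (or qualify them explicitly, e.g. SELECT * FROM DB1.orders).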
03-01-2017
04:27 AM
5 Kudos
Currently there is no functionality that can stop users from creating new notebooks. I confirmed this with the latest unreleased version, and the Apache Zeppelin mailing-list thread below discusses it as well: http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Forbid-creating-new-notes-td3561.html However, as mentioned by Artem above, we can restrict access to existing notebooks.