Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1627 | 06-04-2025 11:36 PM |
| | 2086 | 03-23-2025 05:23 AM |
| | 989 | 03-17-2025 10:18 AM |
| | 3766 | 03-05-2025 01:34 PM |
| | 2591 | 03-03-2025 01:09 PM |
12-19-2019
09:17 AM
1 Kudo
@pnkalyan Did you ever log in before? root/hadoop is only the initial password: on the first login with the root/hadoop combination you are forced to change the password, so my suspicion is you did change it at that time. For reference, see: Learning the Ropes of the HDP Sandbox. Hope that helps
12-17-2019
05:48 AM
1 Kudo
@Bindal Do the following steps:

sandbox-hdp login: root
root@sandbox-hdp.hortonworks.com's password: .....
[root@sandbox-hdp ~]# mkdir -p /tmp/data
[root@sandbox-hdp ~]# cd /tmp/data

You should now be in /tmp/data; to validate that, run:

[root@sandbox-hdp ~]# pwd

Copy your riskfactor1.csv to this directory using a tool like WinSCP or MobaXterm (see my screenshot using WinSCP). My question is: where is the riskfactor1.csv file located? If that's not clear, you can upload it using the Ambari Files view instead: first navigate to /bindal/data and then select Upload (please see the attached screenshot) to upload the file from your laptop. After a successful upload, you can run your Zeppelin job. Keep me posted!
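The staging steps above can be sketched as a short script. This is a minimal sketch, assuming you are logged in to the sandbox as root; the /tmp/data path and the riskfactor1.csv filename come from this thread, and the scp line mirrors what WinSCP/MobaXterm do behind a GUI:

```shell
# Create and verify the local staging directory on the sandbox.
STAGE_DIR=/tmp/data

mkdir -p "$STAGE_DIR"    # create the staging directory (no-op if it exists)
cd "$STAGE_DIR"
pwd                      # should print /tmp/data

# From your laptop, you would then copy the file in, e.g. with scp
# (WinSCP and MobaXterm perform the same transfer via a GUI):
#   scp riskfactor1.csv root@sandbox-hdp.hortonworks.com:/tmp/data/
```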
12-16-2019
02:32 PM
@RobertCare You will need to run the Ranger AD user sync; there is a good document here: https://community.cloudera.com/t5/Community-Articles/Configuring-Ranger-Usersync-with-AD-LDAP-for-a-common/ta-p/245959

To test-run loading User and Group data into Ranger before committing to the changes:
1. Set ranger.usersync.policymanager.mockrun=true. This parameter can be found in Ambari > Ranger > Configs > Advanced > Advanced ranger-ugsync-site.
2. View the Users and Groups that will be loaded into Ranger: tail -f /var/log/ranger/usersync/usersync.log
3. After confirming that the users and groups are retrieved as intended, set ranger.usersync.policymanager.mockrun=false and restart Ranger Usersync. This will sync the users shown in the usersync log to the Ranger database.

HTH
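A quick way to watch the dry run from the Usersync host is to follow the log while mockrun is enabled. This is a sketch: the log path is the one named in the steps above, and the grep pattern is an assumption about typical usersync log lines, not a documented format:

```shell
# Inspect the usersync dry-run output on the Ranger Usersync host.
LOG=/var/log/ranger/usersync/usersync.log

if [ -f "$LOG" ]; then
    # With mockrun=true, usersync logs the users/groups it *would* load.
    tail -n 200 "$LOG" | grep -iE 'user|group'
else
    echo "usersync log not found; run this on the Ranger Usersync host" >&2
fi
```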
12-16-2019
05:53 AM
@Bindal Thanks for sharing the screenshot. I can see from it that your riskfactor and riskfactor1 are directories, not files!! Can you double-click on either of them and check the contents? I have mounted an old HDP 2.6.x for illustration: whatever filesystem you see under the Ambari Files view is in HDFS. Here is the local filesystem, and my Ambari view before the creation of /Bindal/data (the equivalent of /tmp/data).

As a walkthrough: from the Linux CLI as the root user, I created a directory /tmp/data on the local filesystem and placed riskfactor1.csv in there, then created a directory /Bindal/data/ in HDFS. I then copied riskfactor1.csv from the local Linux box to HDFS and checked the copied file in HDFS. I hope that explains the difference between the local filesystem and HDFS; below is again a screenshot to show the difference.

Once the file is in HDFS, your Zeppelin job should run successfully. To reiterate, in the screenshot you shared you need to double-click on riskfactor and riskfactor1, which are directories, to see the difference from my screenshots. HTH
12-11-2019
10:50 AM
@Bindal Spark expects the riskfactor1.csv file to be in the HDFS path /tmp/data/, but it seems to me you have riskfactor1.csv on your local filesystem at /tmp/data. I have run the steps below on a sandbox; please follow them to resolve the "Path does not exist" error.

Log on to the CLI on your sandbox as user root, then switch user to hdfs:

[root@sandbox-hdp ~]# su - hdfs

Check the current HDFS root directory:

[hdfs@sandbox-hdp ~]$ hdfs dfs -ls /
Found 13 items
drwxrwxrwt+ - yarn hadoop 0 2019-10-01 18:34 /app-logs
drwxr-xr-x+ - hdfs hdfs 0 2018-11-29 19:01 /apps
drwxr-xr-x+ - yarn hadoop 0 2018-11-29 17:25 /ats
drwxr-xr-x+ - hdfs hdfs 0 2018-11-29 17:26 /atsv2
drwxr-xr-x+ - hdfs hdfs 0 2018-11-29 17:26 /hdp
drwx------+ - livy hdfs 0 2018-11-29 17:55 /livy2-recovery
drwxr-xr-x+ - mapred hdfs 0 2018-11-29 17:26 /mapred
drwxrwxrwx+ - mapred hadoop 0 2018-11-29 17:26 /mr-history
drwxr-xr-x+ - hdfs hdfs 0 2018-11-29 18:54 /ranger
drwxrwxrwx+ - spark hadoop 0 2019-11-24 22:41 /spark2-history
drwxrwxrwx+ - hdfs hdfs 0 2018-11-29 19:01 /tmp
drwxr-xr-x+ - hdfs hdfs 0 2019-09-21 13:32 /user

Next, create the directory in HDFS. It would usually go under /user/xxxx depending on the user, but here we are creating /tmp/data and giving it open 777 permissions so any user can execute the Spark job.

Create the directory in HDFS:
$ hdfs dfs -mkdir -p /tmp/data/

Change permissions:
$ hdfs dfs -chmod 777 /tmp/data/

Now copy riskfactor1.csv from the local filesystem to HDFS; here I am assuming the file is in /tmp:

[hdfs@sandbox-hdp tmp]$ hdfs dfs -copyFromLocal /tmp/riskfactor1.csv /tmp/data

The above copies riskfactor1.csv from local /tmp to the HDFS location /tmp/data. You can validate by running:

[hdfs@sandbox-hdp ~]$ hdfs dfs -ls /tmp/data
Found 1 items
-rw-r--r-- 1 hdfs hdfs 0 2019-12-11 18:40 /tmp/data/riskfactor1.csv

Now you can run your Spark job in Zeppelin and it should succeed. Please revert!
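The full sequence above can be condensed into one guarded script. This is a sketch of the steps from this answer, assuming the CSV was staged at /tmp/riskfactor1.csv and that you run it as the hdfs user on the sandbox:

```shell
# Copy a locally staged CSV into HDFS so Spark/Zeppelin can find it.
SRC=/tmp/riskfactor1.csv
DEST=/tmp/data

if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -mkdir -p "$DEST"               # create the HDFS directory
    hdfs dfs -chmod 777 "$DEST"              # open permissions so any user can run the job
    hdfs dfs -copyFromLocal "$SRC" "$DEST"   # local /tmp -> HDFS /tmp/data
    hdfs dfs -ls "$DEST"                     # verify riskfactor1.csv is listed
else
    echo "hdfs CLI not found; run this on the sandbox" >&2
fi
```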
12-09-2019
05:41 PM
@saivenkatg55 Please don't forget to vote for a helpful answer and accept the best answer. If you found this answer addressed your initial question, please take a moment to log in and click "Accept" on the answer.
12-04-2019
04:18 AM
@RobertCare Nothing stupid 🙂 The credentials are tricky; they are documented in Learning the Ropes of the HDP Sandbox. See the screenshots below: the Atlas user & password is holger_gov/holger_gov. Hope that helps
12-02-2019
05:15 PM
@SushantRao I just tried it now and got: "Access Restricted. You must be a CDP Data Center customer to access these downloads. If you believe you should have this entitlement then please reach out to support or your customer service representative."
12-02-2019
02:06 PM
@mike_bronson7 You can change the ownership of the HDFS directory to airflow:hadoop, but please do NOT run the -chown command on /. It should target a specific path, something like /users/airflow/xxx. Please let me know.
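A hedged sketch of the targeted chown: the path /user/airflow/dags below is a hypothetical example of mine, since the original thread only gives the truncated form /users/airflow/xxx. Substitute the exact directory named in your Diagnostics log:

```shell
# Change ownership of one specific HDFS directory -- never of /.
TARGET=/user/airflow/dags    # hypothetical example path, NOT the root /

if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -chown -R airflow:hadoop "$TARGET"
    hdfs dfs -ls -d "$TARGET"    # owner column should now read airflow
else
    echo "hdfs CLI not found; run this on a cluster node" >&2
fi
```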
12-02-2019
02:25 AM
@mike_bronson7 The hadoop group encapsulates all the service users, including hdfs. If you run

# cat /etc/group

you should see something like:

hadoop:x:1007:yarn-ats,hive,storm,infra-solr,zookeeper,oozie,atlas,ams,ranger,tez,zeppelin,kms,accumulo,livy,druid,spark,ambari-qa,kafka,hdfs,sqoop,yarn,mapred,hbase,knox

So the -chown should only target the directory shown in the Diagnostics logs. NEVER run the -chown command on /, which is the root directory!! Can you share your log please?
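Before running any chown, it can help to confirm in plain shell that the user really is in the hadoop group. This sketch parses the sample /etc/group line quoted above; on a real node you would read /etc/group itself:

```shell
# Check membership of the hadoop group from a group(5)-style line.
# The sample line is the one quoted in this thread.
GROUP_LINE='hadoop:x:1007:yarn-ats,hive,storm,infra-solr,zookeeper,oozie,atlas,ams,ranger,tez,zeppelin,kms,accumulo,livy,druid,spark,ambari-qa,kafka,hdfs,sqoop,yarn,mapred,hbase,knox'

# Field 4 of /etc/group is the comma-separated member list.
members=$(printf '%s' "$GROUP_LINE" | cut -d: -f4)

case ",$members," in
    *,hdfs,*) echo "hdfs is in the hadoop group" ;;
    *)        echo "hdfs is NOT in the hadoop group" ;;
esac
```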