Member since: 04-13-2016
Posts: 422
Kudos Received: 150
Solutions: 55
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1569 | 05-23-2018 05:29 AM
 | 4500 | 05-08-2018 03:06 AM
 | 1409 | 02-09-2018 02:22 AM
 | 2359 | 01-24-2018 08:37 PM
 | 5585 | 01-24-2018 05:43 PM
03-17-2018
06:12 PM
@kanna k Try setting the parameter below; it should help. Add the following to ambari.properties: client.threadpool.size.max=50, and then run ambari-server restart.
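A minimal sketch of the change, assuming the default Ambari server configuration path (/etc/ambari-server/conf/ambari.properties is typical, but it may differ in your install):

```bash
# append the thread pool setting to ambari.properties (default path assumed; adjust if needed)
echo "client.threadpool.size.max=50" >> /etc/ambari-server/conf/ambari.properties

# restart the Ambari server so the new value is picked up
ambari-server restart
```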
02-09-2018
02:22 AM
@PJ Yes, it's the same even for user IDs, but make sure the user doesn't belong to any other groups. Even if he does, the first policy will get higher priority. Hope this helps.
02-08-2018
03:43 AM
@Dhiraj Refer to this article: http://www.michael-noll.com/blog/2011/04/09/benchmarking-and-stress-testing-an-hadoop-cluster-with-terasort-testdfsio-nnbench-mrbench/
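As a rough illustration of the kind of benchmark runs the linked article covers (the jar paths are the usual HDP locations and the data sizes are examples only; adjust both for your cluster):

```bash
# TeraGen/TeraSort: generate ~10 GB of synthetic data (100-byte rows), then sort it
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar teragen 100000000 /benchmarks/teragen
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar terasort /benchmarks/teragen /benchmarks/terasort

# TestDFSIO write test: 10 files of 1000 MB each (the tests jar name varies by Hadoop version)
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
```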
01-31-2018
11:07 PM
@Carlton Patterson
Put your query in table.hql, then run the command below, passing your Beeline connection string and table.hql as the input file, so that the output is stored in the format you need. The output format can be any of the formats listed below.

beeline -u 'jdbc:hive2://zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' --outputformat=csv2 -f table.hql > /hadoop/hdfs/tableslist.csv

--outputformat=[table/vertical/csv/tsv/dsv/csv2/tsv2] sets the format mode for result display; the default is table. See the Separated-Value Output Formats section of the link below for a description of the recommended sv options. Usage: beeline --outputformat=tsv

https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-OutputFormats

Hope this helps you.
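A minimal sketch of what table.hql might contain and how the export runs end to end (the query and the output path here are illustrative only):

```bash
# table.hql holds the query whose result you want exported (example query only)
cat > table.hql <<'EOF'
show tables;
EOF

# run it through Beeline and capture csv2-formatted output
beeline -u 'jdbc:hive2://zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' \
  --outputformat=csv2 -f table.hql > /tmp/tableslist.csv
```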
01-25-2018
08:40 PM
Hi Team, Is there any way I can define a global variable in one paragraph of a Zeppelin notebook which could then drive other paragraphs, like we do in Jupyter notebooks for Spark/Python? If yes, can someone share a sample example? I'm using Zeppelin 0.7.0 and Spark2. Any help is highly appreciated, and thanks in advance.
01-24-2018
08:37 PM
1 Kudo
@Sujatha Rudra You can export the blueprint of your current cluster as: http://erie1.example.com:8080/api/v1/clusters/ErieCluster?format=blueprint. Please see: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints

To export a blueprint from an existing cluster: curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/clusters/TestCluster?format=blueprint

Also, to list the blueprints registered with Ambari: curl -H "X-Requested-By: ambari" -X GET -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/blueprints

Links:
https://community.hortonworks.com/questions/83437/ambari-export-blueprint.html
https://hortonworks.com/blog/ambari-blueprints-delivers-missing-component-cluster-provisioning/
https://community.hortonworks.com/content/kbentry/47171/automate-hdp-installation-using-ambari-blueprints-1.html

Hope these links help.
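A small sketch of saving the exported blueprint to a file (host, cluster name, credentials, and the output file name are placeholders):

```bash
# export the blueprint of an existing cluster and save it as JSON (values are placeholders)
curl -H "X-Requested-By: ambari" -u admin:admin -X GET \
  "http://erie1.example.com:8080/api/v1/clusters/ErieCluster?format=blueprint" \
  -o erie_cluster_blueprint.json

# list blueprints already registered with Ambari
curl -H "X-Requested-By: ambari" -u admin:admin -X GET \
  "http://erie1.example.com:8080/api/v1/blueprints"
```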
01-24-2018
05:43 PM
2 Kudos
@laki cheli Please create the HDFS home directory for that particular user and then run the command 'hdfs dfs -mkdir test'. In your case I can see you are logged in as root; please log in as the hdfs user, create root's HDFS home directory, and then run your command as root.

Step 1: Log in as the hdfs user
Step 2: hdfs dfs -mkdir -p /user/root/
Step 3: hdfs dfs -chown root /user/root/
Step 4: Log back in as root
Step 5: Run your command 'hdfs dfs -mkdir test'
Step 6: You should now see the test directory under root's home directory using 'hdfs dfs -ls' or 'hdfs dfs -ls /user/root/'

Hope this helps you.
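The same steps as a shell sketch, assuming a non-kerberized cluster where you can sudo to the hdfs user (adjust for your environment):

```bash
# as the hdfs superuser: create root's HDFS home directory and hand it over to root
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root /user/root

# back as root: relative paths now resolve under /user/root
hdfs dfs -mkdir test
hdfs dfs -ls /user/root
```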
01-18-2018
04:59 AM
@karthik nedunchezhiyan Did you get a chance to look at this article? https://community.hortonworks.com/articles/27225/how-qjm-works-in-namenode-ha.html
01-17-2018
10:53 PM
@Rodrigo Mendez I have just validated this: if you set hive.server2.enable.doAs=true, the end directory (hdfs://hdp_cluster/user/my_user/output_folder) will be created as whichever user runs the job. Please check the Kerberos ticket for that user and your proxy-user configuration. Hope this helps you.
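A quick way to check those points (the output path and cluster URI are from this thread; the hive-site.xml location is the usual HDP default and may differ on your cluster):

```bash
# confirm the submitting user has a valid Kerberos ticket
klist

# verify doAs is enabled in HiveServer2's configuration (default HDP config path assumed)
grep -i -A1 "hive.server2.enable.doAs" /etc/hive/conf/hive-site.xml

# the output directory should be owned by the user who ran the job
hdfs dfs -ls hdfs://hdp_cluster/user/my_user/output_folder
```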
01-16-2018
08:44 PM
@na I think that's expected behaviour. For your scenario I would suggest using DistCp with snapshot diffs: distcp -update -diff -delete /source /destination. How to use this feature: first make sure all assumptions are met; typical steps are described below (a command-line sketch follows the list).
1. Create snapshot s0 in the source directory.
2. Issue a default distcp command that copies everything from s0 to the target directory (the command line is like distcp -update <sourceDir>/.snapshot/s0 <targetDir>).
3. Create snapshot s0 in the target directory.
4. Make some changes in the source directory.
5. Create a new snapshot s1, and issue a distcp command like distcp -update -diff s0 s1 <sourceDir> <targetDir> to copy all changes between s0 and s1 to the target directory.
6. Create a snapshot with the same name s1 in the target directory.
7. Repeat steps 4 to 6 with a new snapshot name, for example s2.

Link

Hope this helps you.
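A minimal end-to-end sketch of that workflow (the directory and snapshot names are illustrative; both directories must be snapshottable):

```bash
# allow snapshots on both directories (run with HDFS admin rights)
hdfs dfsadmin -allowSnapshot /data/source
hdfs dfsadmin -allowSnapshot /data/target

# initial baseline: snapshot the source, copy it, snapshot the target with the same name
hdfs dfs -createSnapshot /data/source s0
hadoop distcp -update /data/source/.snapshot/s0 /data/target
hdfs dfs -createSnapshot /data/target s0

# after changes on the source: take s1 and copy only the diff between s0 and s1
hdfs dfs -createSnapshot /data/source s1
hadoop distcp -update -diff s0 s1 /data/source /data/target
hdfs dfs -createSnapshot /data/target s1
```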