Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2720 | 12-06-2018 12:25 PM |
| | 2861 | 11-27-2018 06:00 PM |
| | 2193 | 11-22-2018 03:42 PM |
| | 3567 | 11-20-2018 02:00 PM |
| | 6272 | 11-19-2018 03:24 PM |
11-10-2017
12:53 PM
1 Kudo
@Michael Bronson, You can use this API to get all the available services:

http://{ambari-server}:{port}/api/v1/clusters/{clustername}/stack_versions/{stack-version-no}/repository_versions/{repository-version-no}

Hit the above API and parse the "RepositoryVersions -> services" array. To check the list of installed services, you can use:

http://{ambari-server}:{port}/api/v1/clusters/{clustername}/services

Check the diff between the two to find the services which are not installed. To install a service, use:

curl -u <username>:<password> -i -X PUT -d '{"ServiceInfo": {"state" : "INSTALLED"}}' http://<ambari-server-host>:{port}/api/v1/clusters/<cluster-name>/services/<service-name>

Thanks, Aditya
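To make the "diff" step concrete, here is a minimal sketch using curl and jq; the host, port, cluster name, credentials, and version numbers are placeholders, and the jq paths assume the response shapes described above:

```bash
AMBARI=ambari-host; PORT=8080; CLUSTER=mycluster

# services available in the stack repository (version numbers are examples)
curl -s -u admin:admin \
  "http://$AMBARI:$PORT/api/v1/clusters/$CLUSTER/stack_versions/1/repository_versions/1" \
  | jq -r '.RepositoryVersions.services[].name' | sort > available.txt

# services currently installed on the cluster
curl -s -u admin:admin \
  "http://$AMBARI:$PORT/api/v1/clusters/$CLUSTER/services" \
  | jq -r '.items[].ServiceInfo.service_name' | sort > installed.txt

# anything in the first list but not the second is not installed yet
comm -23 available.txt installed.txt
```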
11-10-2017
12:35 PM
@Michael Bronson,
1) First, try changing the ownership of the directory and restarting. If this works, there is no need to change anything else.
2) If #1 doesn't work and you are OK with removing this volume, then remove the directory from "dfs.datanode.data.dir" and leave 'dfs.datanode.failed.volumes.tolerated' at 0.
3) If you do not want to remove this volume and you are okay continuing with the failed volume, then set 'dfs.datanode.failed.volumes.tolerated' to 1.
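As a quick sketch for checking the current values on a DataNode host; the config path shown is the stock HDP location and may differ, and on an Ambari-managed cluster the actual change should be made through Ambari rather than by editing the file:

```bash
# list of directories the DataNode writes block data to
grep -A1 'dfs.datanode.data.dir' /etc/hadoop/conf/hdfs-site.xml

# number of volumes allowed to fail before the DataNode shuts down
# (treated as 0 when the property is absent)
grep -A1 'dfs.datanode.failed.volumes.tolerated' /etc/hadoop/conf/hdfs-site.xml
```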
11-10-2017
12:13 PM
@Michael Bronson Check if the hdfs user has write permissions for '/xxxx/sdc/hadoop/hdfs/data'. Change the ownership to hdfs:hadoop:

chown hdfs:hadoop /xxxx/sdc/hadoop/hdfs/data

If you are okay with failed volumes, then you can change 'dfs.datanode.failed.volumes.tolerated' to 1; another solution is to remove the above directory (/xxxx/sdc/hadoop/hdfs/data) from 'dfs.datanode.data.dir'. Thanks, Aditya
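A small sketch of the check-and-fix sequence, using the path from the post; note the recursive flag is my addition, the post itself chowns only the top-level directory:

```bash
# inspect current ownership and permissions
ls -ld /xxxx/sdc/hadoop/hdfs/data

# hand the directory tree to the HDFS service user
chown -R hdfs:hadoop /xxxx/sdc/hadoop/hdfs/data

# confirm the change, then restart the DataNode from Ambari
ls -ld /xxxx/sdc/hadoop/hdfs/data
```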
11-10-2017
08:33 AM
1 Kudo
@Gayathri Devi, Assume your initial data is as below, with Index as the row key, which keeps increasing as new data is inserted (Index, value are the column names):

1,a
2,b
3,c

You can run a normal sqoop import command, which imports the complete data to the destination. Now let's say you have added a few more rows to the source, so your input becomes:

1,a
2,b
3,c
4,d
5,e
6,f

Now you can use a sqoop incremental import to bring in only the new rows. You can use "--incremental <mode> --check-column <column name> --last-value <last check column value>", i.e. "--incremental append --check-column Index --last-value 3". This command will import only the last 3 rows (see the sketch below). You can also do incremental imports based on a lastmodified column. https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_incremental_imports Thanks, Aditya
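Putting the flags together, a minimal sketch of the incremental run; the JDBC URL, credentials, table name, and target directory are hypothetical:

```bash
# imports only rows with Index > 3 and appends them to the target dir
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sourcedb \
  --username dbuser -P \
  --table mytable \
  --target-dir /user/hive/imports/mytable \
  --incremental append \
  --check-column Index \
  --last-value 3
```

At the end of the run, sqoop prints the new --last-value to pass to the next incremental import.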
11-10-2017
08:21 AM
@Anurag Mishra, Run your script as below:

sh myscript.sh > output.txt 2>&1

Thanks, Aditya
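For context, '>' redirects stdout to the file and '2>&1' then points stderr at the same place, which is why the order of the two matters. A couple of common variations, for illustration:

```bash
# capture stdout and stderr in output.txt (stderr redirect must come last)
sh myscript.sh > output.txt 2>&1

# append instead of overwriting
sh myscript.sh >> output.txt 2>&1

# also watch the output live while saving it
sh myscript.sh 2>&1 | tee output.txt
```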
11-09-2017
03:10 PM
@Oleg Hmelnits, The default unit is bytes. However, you can use the -format option to view the size in a human-readable format. Thanks, Aditya
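As an aside, and assuming the command in question is hdfs dfs -du (an assumption; the original question isn't quoted here), the -h flag is what prints human-readable sizes:

```bash
# sizes reported in bytes (the default)
hdfs dfs -du /user

# same listing with human-readable units (K, M, G)
hdfs dfs -du -h /user
```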
11-09-2017
02:05 PM
@shin matsuura, Can you paste the content of /etc/yum.repos.d/ambari-hdp-1.repo? Also, can you try running the below on the host where hcat is installed:

yum clean all
yum install -y hive2_2_6_3_0_235

Thanks, Aditya
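If the install still fails, a quick sanity check that yum can actually see the package from the configured repos (a sketch, run on the same host):

```bash
# show the repo definition in question
cat /etc/yum.repos.d/ambari-hdp-1.repo

# verify the package is visible to yum after a metadata refresh
yum clean all
yum list available 'hive2*'
```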
11-09-2017
02:35 AM
@Gayathri Devi, Please use the below query format (note the quoted date literal and the \$CONDITIONS token, which sqoop requires in free-form queries):

--query "select * from hgj where date(starttime)='2017-08-08' AND \$CONDITIONS"

Thanks, Aditya
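For completeness, a minimal sketch of a full import around that flag; the JDBC URL, credentials, and target directory are hypothetical. With a free-form --query, sqoop needs either --split-by or a single mapper (-m 1):

```bash
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sourcedb \
  --username dbuser -P \
  --query "select * from hgj where date(starttime)='2017-08-08' AND \$CONDITIONS" \
  --target-dir /user/hive/imports/hgj \
  -m 1
```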
11-08-2017
02:04 PM
@Kevin Nguyen, This value controls the maximum share of cluster resources that can be used to run ApplicationMasters, which in turn limits the number of concurrently running applications. If it is set very low, ApplicationMasters may not be able to start at all, leaving applications stuck in the ACCEPTED state. If it is set very high, the ApplicationMasters take up most of the resources, leaving the applications themselves only a few.
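Assuming the property in question is the CapacityScheduler's yarn.scheduler.capacity.maximum-am-resource-percent (an assumption, since the original question isn't quoted here), it can be inspected like this; a value of 0.2 would mean at most 20% of resources may go to ApplicationMasters:

```bash
# stock HDP config path; may differ on your cluster
grep -B1 -A2 'maximum-am-resource-percent' /etc/hadoop/conf/capacity-scheduler.xml
```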
11-08-2017
12:59 PM
@MONORANJAN MOHAPATRA, You can use the approach suggested by @Slim, assuming all the data is of fixed length. You can change it as below:

hive> CREATE TABLE foo (bar CHAR(11));
hive> insert into foo values ("00-00-8D-AC");
hive> select * from foo;
OK
00-00-8D-AC
Thanks, Aditya