Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2627 | 11-01-2016 05:43 PM |
|  | 8767 | 11-01-2016 05:36 PM |
|  | 4943 | 07-01-2016 03:20 PM |
|  | 8274 | 05-25-2016 11:36 AM |
|  | 4439 | 05-24-2016 05:27 PM |
02-29-2016 04:48 AM
1 Kudo
@Vinti Maheshwari You can use this: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/determine-hdp-memory-config.html Also, SmartSense is a must: http://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.2.0/index.html
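The linked page documents a helper script, hdp-configuration-utils.py, that computes recommended YARN/MapReduce memory settings from your hardware. A minimal sketch of invoking it, with hypothetical hardware values (flag meanings per the linked doc: -c cores, -m memory in GB, -d data disks, -k whether HBase is installed):

```bash
# Hypothetical hardware: 16 cores, 64 GB RAM, 4 data disks, HBase installed
python hdp-configuration-utils.py -c 16 -m 64 -d 4 -k True
```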
02-29-2016 03:35 AM
@Prakash Punj Restart Ranger and Ambari.
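For the Ambari side, restarting the server from the shell looks like this (running as root is assumed; Ranger itself is then restarted from the Ambari UI):

```bash
# Restart the Ambari server; restart the Ranger service afterwards from the Ambari UI
ambari-server restart
```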
02-29-2016 03:06 AM
@Prakash Punj If you changed the Ranger UI password, then you have to update the 2nd part in the screenshot ("admin_username"). The 1st part is for when you change the admin password for Ambari.
02-29-2016 02:15 AM
@Prakash Punj Did you change your Ranger UI admin password? If you did, then you have to update Ambari with the new password.
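If you prefer the command line over the screenshot's UI fields, Ambari ships a configs.sh helper that can set a config value directly. This is only a sketch: the cluster name, config type (admin-properties), and key name (admin_password) are assumptions for a typical HDP Ranger setup.

```bash
# Hypothetical: push the new Ranger admin password into Ambari's stored config
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set localhost MyCluster admin-properties admin_password 'NewRangerPassword'
```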
02-29-2016 01:44 AM
@Prakash Punj Did you get a chance to test this?
02-29-2016 01:04 AM
@Mahesh Deshmukh The short answer is yes, and you can configure Pig as you like. Please see this tutorial: http://hortonworks.com/hadoop-tutorial/faster-pig-tez/
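As the tutorial covers, switching Pig from MapReduce to the Tez execution engine is a one-flag change (the script name here is hypothetical):

```bash
# Run a Pig script on Tez instead of MapReduce
pig -x tez myscript.pig
```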
02-29-2016 12:49 AM
2 Kudos
@Jan J See this: http://spark.apache.org/sql/ You have various options to access structured data.
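As one concrete option, here is a minimal Spark SQL sketch using the Spark 1.x API (the JSON path and field names are hypothetical):

```python
# Load structured data into a DataFrame and query it with SQL (Spark 1.x API)
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="StructuredDataExample")
sqlContext = SQLContext(sc)

df = sqlContext.read.json("hdfs:///tmp/people.json")  # hypothetical input file
df.registerTempTable("people")

sqlContext.sql("SELECT name, age FROM people WHERE age > 21").show()
```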
02-29-2016 12:22 AM
Hi @Matt Davies See this blog: http://blog.sequenceiq.com/blog/2014/11/17/datalake-cloudbreak-2/ You can set "hive.metastore.warehouse.dir": "s3://siq-hadoop/apps/hive/warehouse". Can you share more details from the HS2 logs?
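In hive-site.xml form, that setting would look like the sketch below (the bucket path comes from the linked blog; substitute your own):

```xml
<!-- Point the Hive warehouse at S3; bucket path taken from the linked blog -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>s3://siq-hadoop/apps/hive/warehouse</value>
</property>
```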
02-29-2016 12:20 AM
@Matt Davies Your question: "Does anyone know how to make Hive default to S3 so each table does not need to be external? Is this possible? Locally managed table to S3..." See this:

Using S3 as the default FS

HDP can, in theory, be set up to use S3 as the default filesystem (instead of HDFS). Detailed instructions on how to replace HDFS with S3 are given here: http://wiki.apache.org/hadoop/AmazonS3

At a high level, the "fs.defaultFS" property has to be set to point to S3 in core-site.xml. The default setting for this property looks like this:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoopNamenode:8020</value>
</property>
```

Change it to the setting below:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>s3://BUCKET</value>
</property>
```

In addition to setting the default filesystem to S3, we also have to provide the AWS access key ID and AWS secret access key. Both settings are shown below:

```xml
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>SECRET</value>
</property>
```

Hive Tables in S3

A Hive table that uses S3 as storage can be created as below:

```sql
CREATE TABLE SRC_TABLE (
  COL1 string,
  COL2 string,
  COL3 string
)
ROW FORMAT DELIMITED
STORED AS TEXTFILE
LOCATION 's3://BUCKET_NAME/user/root/src_table';
```

The only difference here is that we specify the location of the table to be a sub-folder under "s3://BUCKET_NAME". Data can be loaded into this table using the Hive command:

```sql
hive> load data local inpath "local_table.csv" into table SRC_TABLE;
```

The path "s3://BUCKET_NAME/user/root/src_table" can be treated like any path in HDFS and can be used with Hive/Pig/MapReduce etc.
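To sanity-check that the data actually landed in the bucket, the table location can be listed just like an HDFS path (BUCKET_NAME is the placeholder from above):

```bash
# List the files backing the S3-based Hive table
hadoop fs -ls s3://BUCKET_NAME/user/root/src_table
```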