Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 3368 | 05-03-2017 05:13 PM |
| | 2800 | 05-02-2017 08:38 AM |
| | 3079 | 05-02-2017 08:13 AM |
| | 3007 | 04-10-2017 10:51 PM |
| | 1523 | 03-28-2017 02:27 AM |
12-16-2016
04:08 PM
@Praveen PentaReddy That's interesting. Do you have another environment to test your script on? As I said, I see no issues on my end. As for your question about the set command, I answered that in the following thread: https://community.hortonworks.com/questions/1954/hcatbin-is-not-defined-define-it-to-be-your-hcat-s.html If my answer was useful and resolved your issue, please accept it as best.
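For quick reference, the fix discussed in that thread is to point Pig at the hcat binary at the top of your script; the path shown here is the typical Sandbox location and may differ in your environment:

set hcat.bin /usr/bin/hcat;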
12-16-2016
02:27 PM
10 Kudos
For the DevOps crowd, you can take it a step further and automate provisioning of the Ambari server. For that, consider community contributions like https://supermarket.chef.io/cookbooks/ambari if you use Chef, for example. For an exhaustive tour of the REST API, consult the docs: https://github.com/apache/ambari/blob/trunk/ambari-views/docs/index.md This recipe assumes an unsecured HDP cluster with NameNode HA. Tested on Ambari 2.4.2. A short shell sketch tying a few of these calls together follows the sample output at the end of this post.

# list all available views for the current version of Ambari
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/

# get Files View only
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES

# get all versions of Files View for the current Ambari release
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions

# get a specific version of the FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions/1.0.0

# create an instance of the FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X POST http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE

# delete an instance of the FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X DELETE http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE

# get a specific instance of the FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE

# create a Files view instance with properties
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X POST http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE \
--data '{
"ViewInstanceInfo" : {
"description" : "Files API",
"label" : "Files View",
"properties" : {
"webhdfs.client.failover.proxy.provider" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"webhdfs.ha.namenode.http-address.nn1" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.http-address.nn2" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.https-address.nn1" : "u1201.ambari.apache.org:50470",
"webhdfs.ha.namenode.https-address.nn2" : "u1202.ambari.apache.org:50470",
"webhdfs.ha.namenode.rpc-address.nn1" : "u1201.ambari.apache.org:8020",
"webhdfs.ha.namenode.rpc-address.nn2" : "u1202.ambari.apache.org:8020",
"webhdfs.ha.namenodes.list" : "nn1,nn2",
"webhdfs.nameservices" : "hacluster",
"webhdfs.url" : "webhdfs://hacluster"
}
}
}'

# create/update a Files view instance (new or existing) with new properties
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X PUT http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE \
--data '{
"ViewInstanceInfo" : {
"description" : "Files API",
"label" : "Files View",
"properties" : {
"webhdfs.client.failover.proxy.provider" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"webhdfs.ha.namenode.http-address.nn1" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.http-address.nn2" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.https-address.nn1" : "u1201.ambari.apache.org:50470",
"webhdfs.ha.namenode.https-address.nn2" : "u1202.ambari.apache.org:50470",
"webhdfs.ha.namenode.rpc-address.nn1" : "u1201.ambari.apache.org:8020",
"webhdfs.ha.namenode.rpc-address.nn2" : "u1202.ambari.apache.org:8020",
"webhdfs.ha.namenodes.list" : "nn1,nn2",
"webhdfs.nameservices" : "hacluster",
"webhdfs.url" : "webhdfs://hacluster"
}
}
}'

# create an instance of the Hive view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X POST http://localhost:8080/api/v1/views/HIVE/versions/1.0.0/instances/HIVE_NEW_INSTANCE \
--data '{
"ViewInstanceInfo" : {
"description" : "Hive View",
"label" : "Hive View",
"properties" : {
"webhdfs.client.failover.proxy.provider" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"webhdfs.ha.namenode.http-address.nn1" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.http-address.nn2" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.https-address.nn1" : "u1201.ambari.apache.org:50470",
"webhdfs.ha.namenode.https-address.nn2" : "u1202.ambari.apache.org:50470",
"webhdfs.ha.namenode.rpc-address.nn1" : "u1201.ambari.apache.org:8020",
"webhdfs.ha.namenode.rpc-address.nn2" : "u1202.ambari.apache.org:8020",
"webhdfs.ha.namenodes.list" : "nn1,nn2",
"webhdfs.nameservices" : "hacluster",
"webhdfs.url" : "webhdfs://hacluster",
"hive.host" : "u1203.ambari.apache.org",
"hive.http.path" : "cliservice",
"hive.http.port" : "10001",
"hive.metastore.warehouse.dir" : "/apps/hive/warehouse",
"hive.port" : "10000",
"hive.transport.mode" : "binary",
"yarn.ats.url" : "http://u1202.ambari.apache.org:8188",
"yarn.resourcemanager.url" : "u1202.ambari.apache.org:8088"
}
}
}'

# interact with a FILES view instance
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE/resources/files/fileops/listdir?path=%2F

# once you create an instance, you can see its current properties
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions/1.0.0/

# output of previous command
{
"href" : "http://localhost:8080/api/v1/views/FILES/versions/1.0.0/",
"ViewVersionInfo" : {
"archive" : "/var/lib/ambari-server/resources/views/work/FILES{1.0.0}",
"build_number" : "161",
"cluster_configurable" : true,
"description" : null,
"label" : "Files",
"masker_class" : null,
"max_ambari_version" : null,
"min_ambari_version" : "2.0.*",
"parameters" : [
{
"name" : "webhdfs.url",
"description" : "Enter the WebHDFS FileSystem URI. Typically this is the dfs.namenode.http-address\n
property in the hdfs-site.xml configuration. URL must be accessible from Ambari Server.",
"label" : "WebHDFS FileSystem URI",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "core-site/fs.defaultFS",
"required" : true,
"masked" : false
},
{
"name" : "webhdfs.nameservices",
"description" : "Comma-separated list of nameservices. Value of hdfs-site/dfs.nameservices property",
"label" : "Logical name of the NameNode cluster",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "hdfs-site/dfs.nameservices",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.ha.namenodes.list",
"description" : "Comma-separated list of namenodes for a given nameservice.\n Value of hdfs
-site/dfs.ha.namenodes.[nameservice] property",
"label" : "List of NameNodes",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.ha.namenode.rpc-address.nn1",
"description" : "RPC address for first name node.\n Value of hdfs-site/dfs.namenode.rpc-add
ress.[nameservice].[namenode1] property",
"label" : "First NameNode RPC Address",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.ha.namenode.rpc-address.nn2",
"description" : "RPC address for second name node.\n Value of hdfs-site/dfs.namenode.rpc-ad
dress.[nameservice].[namenode2] property",
"label" : "Second NameNode RPC Address",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.ha.namenode.http-address.nn1",
"description" : "WebHDFS address for first name node.\n Value of hdfs-site/dfs.namenode.htt
p-address.[nameservice].[namenode1] property",
"label" : "First NameNode HTTP (WebHDFS) Address",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.ha.namenode.http-address.nn2",
"description" : "WebHDFS address for second name node.\n Value of hdfs-site/dfs.namenode.ht
tp-address.[nameservice].[namenode2] property",
"label" : "Second NameNode HTTP (WebHDFS) Address",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.client.failover.proxy.provider",
"description" : "The Java class that HDFS clients use to contact the Active NameNode\n Valu
e of hdfs-site/dfs.client.failover.proxy.provider.[nameservice] property",
"label" : "Failover Proxy Provider",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "fake",
"required" : false,
"masked" : false
},
{
"name" : "hdfs.auth_to_local",
"description" : "Auth to Local Configuration",
"label" : "Auth To Local",
"placeholder" : null,
"defaultValue" : null,
"clusterConfig" : "core-site/hadoop.security.auth_to_local",
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.username",
"description" : "doAs for proxy user for HDFS. By default, uses the currently logged-in Ambari user.",
"label" : "WebHDFS Username",
"placeholder" : null,
"defaultValue" : "${username}",
"clusterConfig" : null,
"required" : false,
"masked" : false
},
{
"name" : "webhdfs.auth",
"description" : "Semicolon-separated authentication configs.",
"label" : "WebHDFS Authorization",
"placeholder" : "auth=SIMPLE",
"defaultValue" : null,
"clusterConfig" : null,
"required" : false,
"masked" : false
}
],
"status" : "DEPLOYED",
"status_detail" : "Deployed /var/lib/ambari-server/resources/views/work/FILES{1.0.0}.",
"system" : false,
"version" : "1.0.0",
"view_name" : "FILES"
},
"permissions" : [
{
"href" : "http://localhost:8080/api/v1/views/FILES/versions/1.0.0/permissions/4",
"PermissionInfo" : {
"permission_id" : 4,
"version" : "1.0.0",
"view_name" : "FILES"
}
}
],
"instances" : [
{
"href" : "http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/Files",
"ViewInstanceInfo" : {
"instance_name" : "Files",
"version" : "1.0.0",
"view_name" : "FILES"
}
}
]
}
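To tie a few of these calls together, here is a minimal shell sketch that creates FILES_NEW_INSTANCE only if it does not already exist. The AMBARI_URL variable and the use of curl's HTTP status code are illustration-only choices, not anything mandated by the Ambari docs.

#!/usr/bin/env bash
# Minimal sketch: create the Files view instance only when it is missing.
AMBARI_URL="http://localhost:8080/api/v1"   # assumption: adjust to your Ambari server
INSTANCE_URL="$AMBARI_URL/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE"

# Ask Ambari for the instance and capture only the HTTP status code.
status=$(curl --user admin:admin -s -o /dev/null -w '%{http_code}' \
  -H 'X-Requested-By: ambari' -X GET "$INSTANCE_URL")

if [ "$status" = "404" ]; then
  # The instance is absent, so create it (same POST as shown earlier).
  curl --user admin:admin -i -H 'X-Requested-By: ambari' -X POST "$INSTANCE_URL"
else
  echo "Instance already exists (HTTP $status), skipping create."
fi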
12-16-2016
07:26 AM
@Praveen PentaReddy I think I understand what you mean now; you may have run into a bug. What version of Pig and HCatalog are you running? I just tested on the Sandbox with HDP 2.5 and it works:

set hcat.bin /usr/bin/hcat;
a = load 'codes' using org.apache.hive.hcatalog.pig.HCatLoader();
b = foreach a generate $0 as code, $1 as description, $2 as total_emp, $3 as salary;
store b into 'codes' using org.apache.hive.hcatalog.pig.HCatStorer();
My codes table contained 823 rows; after this execution it contains 1646, as expected per the HCatalog wiki: "You can write to a non-partitioned table simply by using HCatStorer. The contents of the table will be overwritten: store z into 'web_data' using org.apache.hive.hcatalog.pig.HCatStorer();" https://cwiki.apache.org/confluence/display/Hive/HCatalog+LoadStore#HCatalogLoadStore-HCatStorer
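If you want to reproduce this, one common way to run a script like the one above with HCatalog support enabled is shown here; the script filename is just a placeholder:

pig -useHCatalog copy_codes.pig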
12-16-2016
07:16 AM
Then you need a temporary table to store the intermediate data. This is ugly but works; there's probably a better way, but it's late where I am 🙂

set hcat.bin /usr/bin/hcat;
a = load 'codes' using org.apache.hive.hcatalog.pig.HCatLoader();
b = foreach a generate $0 as code, $1 as description, $2 as total_emp, $3 as salary;
sql drop table if exists codes_temp;
sql create table codes_temp(code string, description string, total_emp int, salary int);
store b into 'codes_temp' using org.apache.hive.hcatalog.pig.HCatStorer();
sql drop table if exists codes;
sql create table codes(code string, description string, total_emp int, salary int);
c = load 'codes_temp' using org.apache.hive.hcatalog.pig.HCatLoader();
d = foreach c generate $0 as code, $1 as description, $2 as total_emp, $3 as salary;
store d into 'codes' using org.apache.hive.hcatalog.pig.HCatStorer();
sql drop table if exists codes_temp;
12-16-2016
07:02 AM
@Mohana Murali Gurunathan I am not aware of any plans to backport this feature into the 2.4 branch. We're deprecating the 2.2 and 2.3 branches with the release of HDP 2.5.3, so you will have to make a tough choice and upgrade at some point. Instead of going off the beaten path, it's a safer bet to upgrade to 2.5.x and reap the benefits of these features. Otherwise, my guess would be to look at the Ranger REST API and see if you can inject tags into your current Ranger policies. https://cwiki.apache.org/confluence/display/RANGER/REST+APIs+for+Service+Definition,+Service+and+Policy+Management#RESTAPIsforServiceDefinition,ServiceandPolicyManagement-UpdatePolicybyid
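As a starting point, something along these lines fetches an existing policy so you can edit it and push it back; the host, port, policy id, and policy.json file are placeholders, and the exact payload fields should be checked against the wiki page above:

# get an existing policy by id (placeholder host, credentials, and id)
curl -u admin:admin -H 'Content-Type: application/json' -X GET http://ranger-host.example.com:6080/service/public/v2/api/policy/42

# push the edited policy back (policy.json is the downloaded body with your changes)
curl -u admin:admin -H 'Content-Type: application/json' -X PUT -d @policy.json http://ranger-host.example.com:6080/service/public/v2/api/policy/42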
12-16-2016
06:50 AM
3 Kudos
It's easier to drop the table and recreate it; DELETE FROM a table and TRUNCATE are not supported by the API.

set hcat.bin /usr/bin/hcat;
sql drop table if exists codes;
sql create table codes(code string, description string, total_emp int, salary int);
a = load 'sample_07' using org.apache.hive.hcatalog.pig.HCatLoader();
b = load 'sample_08' using org.apache.hive.hcatalog.pig.HCatLoader();
c = join b by code, a by code;
d = foreach c generate $0 as code, $1 as description, $2 as total_emp, $3 as salary;
store d into 'codes' using org.apache.hive.hcatalog.pig.HCatStorer();
12-15-2016
10:27 PM
2 Kudos
@Mohana Murali Gurunathan Ranger tag-based policies are not available with Ranger 0.5; the feature was added in Ranger 0.6. https://cwiki.apache.org/confluence/display/RANGER/0.6+Release+-+Apache+Ranger https://cwiki.apache.org/confluence/display/RANGER/Tag+Based+Policies HDP 2.5 is the first release to include both Atlas and Ranger and ships the Ranger tagsync service, which is basically the mechanism you're asking for.
12-15-2016
10:03 PM
@Dmitry Otblesk Try the Ambari Views API to update properties. Here's an example:

curl --user admin:admin -i -H 'X-Requested-By: ambari' -X PUT http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE \
--data '{
"ViewInstanceInfo" : {
"description" : "Files API",
"label" : "Files View",
"properties" : {
"webhdfs.client.failover.proxy.provider" : "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
"webhdfs.ha.namenode.http-address.nn1" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.http-address.nn2" : "u1201.ambari.apache.org:50070",
"webhdfs.ha.namenode.https-address.nn1" : "u1201.ambari.apache.org:50470",
"webhdfs.ha.namenode.https-address.nn2" : "u1202.ambari.apache.org:50470",
"webhdfs.ha.namenode.rpc-address.nn1" : "u1201.ambari.apache.org:8020",
"webhdfs.ha.namenode.rpc-address.nn2" : "u1202.ambari.apache.org:8020",
"webhdfs.ha.namenodes.list" : "nn1,nn2",
"webhdfs.nameservices" : "hacluster",
"webhdfs.url" : "webhdfs://hacluster"
}
}
}'
I have more examples published here: https://github.com/dbist/ambari-chef/blob/master/notes To get a specific instance of a view and delete it:

# get specific instance of FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE
# delete an instance of FILES view
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X DELETE http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE
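To confirm the update took effect, you can read the instance back; narrowing the response with the fields query parameter is optional and shown here as an assumption based on Ambari's partial-response syntax:

# read back only the instance properties after the PUT
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET 'http://localhost:8080/api/v1/views/FILES/versions/1.0.0/instances/FILES_NEW_INSTANCE?fields=ViewInstanceInfo/properties'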
12-13-2016
11:30 PM
There might be an Atlas Oozie hook enabled, hence the dependency on Oozie.
12-13-2016
11:29 PM
Apache Oozie requires a restart after an Atlas configuration update, but may not be included in the services marked as requiring restart in Ambari. Select Oozie > Service Actions > Restart All to restart Oozie along with the other services. http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-governance/content/ch_hdp_data_governance_install_atlas_ambari.html
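If you prefer to script the restart, stopping and starting the service through the Ambari API achieves the same thing; this is a minimal sketch and the cluster name MyCluster is a placeholder:

# stop Oozie (placeholder cluster name)
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Oozie"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/MyCluster/services/OOZIE

# start Oozie again once the stop request completes
curl --user admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Start Oozie"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/MyCluster/services/OOZIE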