Member since 05-30-2018
1322 Posts · 715 Kudos Received · 148 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4045 | 08-20-2018 08:26 PM |
| | 1943 | 08-15-2018 01:59 PM |
| | 2372 | 08-13-2018 02:20 PM |
| | 4105 | 07-23-2018 04:37 PM |
| | 5010 | 07-19-2018 12:52 PM |
07-20-2016
03:16 AM
1 Kudo
@Greg Polanchyck some of the functionality you are looking for is available in the WebHCat API. Take a look here.

Description: List the columns in an HCatalog table.

URL: http://www.myserver.com/templeton/v1/ddl/database/:db/table/:table/column

Parameters

| Name | Description | Required? | Default |
|---|---|---|---|
| :db | The database name | Required | None |
| :table | The table name | Required | None |

The standard parameters are also supported.

Results

| Name | Description |
|---|---|
| columns | A list of column names and types |
| database | The database name |
| table | The table name |

Example Curl Command
% curl -s 'http://localhost:50111/templeton/v1/ddl/database/default/table/my_table/column?user.name=ctdean'
JSON Output
{
"columns": [
{
"name": "id",
"type": "bigint"
},
{
"name": "user",
"comment": "The user name",
"type": "string"
},
{
"name": "my_p",
"type": "string"
},
{
"name": "my_q",
"type": "string"
}
],
"database": "default",
"table": "my_table"
}
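As a minimal sketch (not official client code), here is how you might parse that JSON response once fetched; the sample string below is just the output shown above, embedded for illustration:

```python
# Parse the JSON returned by the WebHCat /ddl/database/:db/table/:table/column
# call. In practice you would fetch this over HTTP first (e.g. with the curl
# command above); here the sample response from the post is embedded directly.
import json

sample = """
{
  "columns": [
    {"name": "id", "type": "bigint"},
    {"name": "user", "comment": "The user name", "type": "string"},
    {"name": "my_p", "type": "string"},
    {"name": "my_q", "type": "string"}
  ],
  "database": "default",
  "table": "my_table"
}
"""

doc = json.loads(sample)
# Build a name -> type mapping for every column in the table
columns = {c["name"]: c["type"] for c in doc["columns"]}
print(columns)  # {'id': 'bigint', 'user': 'string', 'my_p': 'string', 'my_q': 'string'}
```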
07-20-2016
03:11 AM
@payal patel Do you have any tables in Hive? Can you do a quick test and check whether you are able to query those tables? If so, can you try to insert into one of them?
07-20-2016
03:08 AM
@sankar rao can you post here what you see in the log file?
07-20-2016
02:50 AM
1 Kudo
@Kuldeep Kulkarni each service configuration can be updated using the API. Take a look here. You can also make changes in the blueprint. A configuration update involves the following steps:

- Identify the config type to update and note the latest version applied. When a config type is updated, the whole property set needs to be updated, so copying the values from the latest version and editing specific values (or adding/removing properties as needed) is the easiest option.
- Read the cluster resource and note the version (tag) of the type you want to update.
- Read the config type with that tag and note the properties.
- Edit the properties as needed and then update the config type.
- A config update requires creating a new version (the current timestamp is typically a good choice).
- The new version of the config type must be added and applied to the cluster.
- Restart affected services/components for the config change to take effect.

You can use the APIs or a wrapper script (/var/lib/ambari-server/resources/scripts/configs.sh) to edit configurations.

Edit configuration using APIs (verified against releases 1.4.1/1.2.5). Starting with 1.4.2/1.4.3 you will have to add the -H option to the curl calls, e.g. -H "X-Requested-By: ambari".

1. Find the latest version of the config type that you need to update.
curl -u admin:admin -X GET http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME?fields=Clusters/desired_configs
Sample OUTPUT
{
"href" : "http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME?fields=Clusters/desired_configs" ,
"Clusters" : {
"cluster_name" : "CLUSTER_NAME" ,
"version" : "HDP-2.0.6" ,
"desired_configs" : {
...
"mapred-site" : {
"user" : "admin" ,
"tag" : "version1384716039631"
}
...
}
}
}
2. Read the config type with the correct tag.
curl -u admin:admin "http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/configurations?type=mapred-site&tag=version1384716039631"
Sample OUTPUT
{
"href" : "http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/configurations?type=mapred-site&tag=version1384716039631" ,
"items" : [
{
"href" : "http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/configurations?type=mapred-site&tag=version1384716039631" ,
"tag" : "version1384716039631" ,
"type" : "mapred-site" ,
"Config" : {
"cluster_name" : "CLUSTER_NAME"
},
"properties" : {
... THESE ARE THE PROPERTY KEY-VALUE PAIRS ...
}
}]
}
3a. Save a new version of the config and apply it (see 3b for doing it in one call).
curl --user admin:admin -i -X POST -d '{"type": "mapred-site", "tag": "version1384716041120", "properties" : {"mapreduce.admin.map.child.java.opts" : "-Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN",...}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/configurations
curl --user admin:admin -i -X PUT -d '{"Clusters":{"desired_config" : {"type": "mapred-site", "tag": "version1384716041120"}}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME
3b. Save a new version of the config and apply it in one call.
curl --user admin:admin -i -X PUT -d '{"Clusters":{"desired_config" : {"type": "mapred-site", "tag": "version1384716041120", "properties" : {...}}}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME
4. Restart all components or services for the config change to take effect.
E.g. stop and start a service:
curl --user admin:admin -i -X PUT -d '{"RequestInfo": {"context": "Stop HDFS"}, "ServiceInfo": {"state": "INSTALLED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS
curl --user admin:admin -i -X PUT -d '{"RequestInfo": {"context": "Start HDFS"}, "ServiceInfo": {"state": "STARTED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS
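The tag/payload logic from the steps above can be sketched in Python. This is an assumption-heavy illustration, not official Ambari client code: the helper names (new_tag, build_desired_config) are hypothetical, and the actual GET/PUT calls are left as comments since they need a live Ambari server.

```python
import json
import time

def new_tag():
    # Ambari just needs a unique tag; the current timestamp is a good choice.
    return "version%d" % int(time.time() * 1000)

def build_desired_config(config_type, current_properties, overrides):
    # A config update must carry the WHOLE property set, not just the delta,
    # so merge the full set read in step 2 with the values being changed.
    merged = dict(current_properties)
    merged.update(overrides)
    return {"Clusters": {"desired_config": {
        "type": config_type,
        "tag": new_tag(),
        "properties": merged,
    }}}

# Properties as read in step 2 (abbreviated to one entry for illustration)
current = {"mapreduce.admin.map.child.java.opts": "-Djava.net.preferIPv4Stack=true"}
payload = build_desired_config("mapred-site", current,
                               {"mapreduce.map.memory.mb": "2048"})
print(json.dumps(payload))
# To apply (step 3b), PUT this payload to
# http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME
```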
07-20-2016
02:44 AM
@sujitha sanku Here are some thoughts. You're right that data in HDFS is immutable; however, with Hive ACID and Phoenix/HBase you are able to update data. Those products have internal mechanisms that allow updates, but at the core the data in HDFS is not truly updated in place; it only gives that perception, which is why there is such a thing as major/minor compaction. I won't go into too much detail on that. So if data is updated in HBase, you can use NiFi to detect when a record is changed and create an alert based on that. As for Hive ACID, I am not aware of similar functionality; however, Attunity has products with CDC functionality on Hadoop, so I would reach out to them. If that is not possible, you can build change-tracking functionality yourself, but it would be a custom solution. Again, that is for Hive.
07-20-2016
02:30 AM
@david serafini what version of SUSE are you using?
07-19-2016
07:18 PM
@Gopichand Mummineni my understanding is that it has to be run as oozie and not foo@*. @Benjamin Leonhardi please confirm or correct this understanding.
07-19-2016
07:14 PM
1 Kudo
@Alvin Jin I recently found out through this post about these options:

- RESTful API (preferred method); more on the API here.
- SAP HANA JDBC connection; more here.

@Randy Gelhausen pointed out a very important issue with connecting via JDBC: "one thing to understand here is that traditionally going direct to SAP tables is a no-no. There's a lot of relational modeling that using the SAP APIs does for you behind the scenes. Just depends on what you need." That being said, I would recommend the RESTful API if possible.
07-19-2016
07:08 PM
@Gopichand Mummineni this is how I would do it. I got much of this from @Benjamin Leonhardi's feedback.

Shell Action (this option requires the client to be installed on all nodes):
- Store the keytabs on HDFS, secured via Ranger/ACLs/chmod.
- Use the file tab to identify the HDFS keytab location; when the Oozie shell action runs, it will download the keytab to the local YARN directory.
- kinit inside the shell script.
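As a hypothetical sketch of the last step, expressed in Python: build the kinit command for a keytab that Oozie's file tab has already staged into the local YARN working directory. The keytab filename and principal below are made-up examples.

```python
def build_kinit_cmd(keytab, principal):
    # kinit -kt <keytab> <principal> obtains a Kerberos TGT from the keytab
    return ["kinit", "-kt", keytab, principal]

cmd = build_kinit_cmd("oozie.service.keytab", "oozie/host.example.com@EXAMPLE.COM")
print(" ".join(cmd))
# In the real shell script this is simply:
#   kinit -kt oozie.service.keytab oozie/host.example.com@EXAMPLE.COM
```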
07-19-2016
03:57 AM
@rdoktorics do you have an example of how to do this? Without a recipe I am having difficulty understanding how to add the Ranger metastore (DB), which was previously done via a recipe.