Member since 03-04-2019
Posts: 59
Kudos Received: 24
Solutions: 5
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 6306 | 07-26-2018 08:10 PM |
|  | 8192 | 07-24-2018 09:49 PM |
|  | 3929 | 10-08-2017 08:00 PM |
|  | 3212 | 07-31-2017 03:17 PM |
|  | 1124 | 12-05-2016 11:24 PM |
07-26-2018 08:10 PM
1 Kudo

@Paul Lam I ran into the same issue in the past. Basically you will need to stop the Ambari server, use a hidden command to remove the mpack, then reinstall the proper HDF mpack. Here are the steps I took to get it resolved:

Step 1:
ambari-server stop

Step 2:
ambari-server uninstall-mpack --mpack-name=hdf-ambari-mpack --verbose

Step 3:
ambari-server install-mpack --mpack=hdf-ambari-mpack-<version>.tar.gz --verbose

Step 4:
ambari-server start

Please accept the answer if it helped. Thanks.
						
					
07-26-2018 07:27 PM
@Gourav You can use the -n and -p options to specify the username and password. For instance:

beeline -u jdbc:hive2://localhost:10000/default -n username -p password

However, for security reasons I would not supply the password in plaintext as option '-p' does, but rather read it from a permission-protected password file with '-w'. For instance:

beeline -u jdbc:hive2://localhost:10000/default -n username -w password_file

Also, if you have over-the-wire encryption enabled, either SASL or SSL, you will have to use different connection strings. For instance, if SASL is enabled the string depends on the qop setting; here is an example when qop is set to auth-conf (the highest security level):

beeline -u "jdbc:hive2://localhost:10000/default;sasl.qop=auth-conf" -n username -w password_file

Another example is when you have SSL enabled for the channel; here is a sample connection string:

beeline -u "jdbc:hive2://localhost:10000/default;ssl=true;sslTrustStore=/sample/security/keystores/hostJKStruststore.jks;trustStorePassword=password" -n username -w password_file

Hope that helps! Please accept the answer if it answered your questions.
						
					
07-26-2018 05:59 PM
@Paul Norris Glad it helped. If you look at the JSON contents, both Dominika and I are referring to the same blueprint.
						
					
07-25-2018 04:26 PM
Hi Michael,

As mentioned in that Stack Overflow question, we will need to query the underlying RDBMS that the Hive Metastore sits on. For instance, in my demo environment I use MySQL as the backing Hive Metastore storage, so I will need to connect to that MySQL host and query the Hive metadata.

For demonstration purposes, let's add some comments at both the table and the column level, using the table 'airline_tweets' as an example.

Add a comment at the table level:

ALTER TABLE airline_tweets SET TBLPROPERTIES ('comment' = 'Airline Tweets');

Add a comment to column 'tweet_id':

ALTER TABLE airline_tweets CHANGE tweet_id tweet_id STRING COMMENT 'This is column tweet_id';

Now let's connect to the MySQL host mentioned above. I'm connecting as the root user; you can also use the 'hive' user.

[root@scregionm2 ~]# mysql -u root
mysql> use hive;
mysql> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| AUX_TABLE                 |
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| COMPACTION_QUEUE          |
| COMPLETED_COMPACTIONS     |
| COMPLETED_TXN_COMPONENTS  |
| DATABASE_PARAMS           |
| DBS                       |
| DB_PRIVS                  |
| DELEGATION_TOKENS         |
| FUNCS                     |
| FUNC_RU                   |
| GLOBAL_PRIVS              |
| HIVE_LOCKS                |
| IDXS                      |
| INDEX_PARAMS              |
| KEY_CONSTRAINTS           |
| MASTER_KEYS               |
| NEXT_COMPACTION_QUEUE_ID  |
| NEXT_LOCK_ID              |
| NEXT_TXN_ID               |
| NOTIFICATION_LOG          |
| NOTIFICATION_SEQUENCE     |
| NUCLEUS_TABLES            |
| PARTITIONS                |
| PARTITION_EVENTS          |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_PRIVS            |
| PART_COL_STATS            |
| PART_PRIVS                |
| ROLES                     |
| ROLE_MAP                  |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| TBL_COL_PRIVS             |
| TBL_PRIVS                 |
| TXNS                      |
| TXN_COMPONENTS            |
| TYPES                     |
| TYPE_FIELDS               |
| VERSION                   |
| WRITE_SET                 |
+---------------------------+
57 rows in set (0.00 sec)
As you can see, all the Hive metadata is stored in those tables. Besides using the 'DESCRIBE' command in Hive, you can also retrieve the column comments from the Hive Metastore database. Below is the SQL command I used to query the column comment we just added:

mysql> SELECT c.* FROM hive.TBLS t JOIN hive.DBS d ON t.DB_ID = d.DB_ID JOIN hive.SDS s ON t.SD_ID = s.SD_ID JOIN hive.COLUMNS_V2 c ON s.CD_ID = c.CD_ID WHERE TBL_NAME = 'airline_tweets' AND d.NAME = 'default' ORDER BY INTEGER_IDX;
+-------+-------------------------+------------------------------+-----------+-------------+
| CD_ID | COMMENT                 | COLUMN_NAME                  | TYPE_NAME | INTEGER_IDX |
+-------+-------------------------+------------------------------+-----------+-------------+
|   141 | This is column tweet_id | tweet_id                     | string    |           0 |
|   141 | NULL                    | airline_sentiment            | string    |           1 |
|   141 | NULL                    | airline_sentiment_confidence | string    |           2 |
|   141 | NULL                    | negativereason               | string    |           3 |
|   141 | NULL                    | negativereason_confidence    | string    |           4 |
|   141 | NULL                    | airline                      | string    |           5 |
|   141 | NULL                    | airline_sentiment_gold       | string    |           6 |
|   141 | NULL                    | name                         | string    |           7 |
|   141 | NULL                    | negativereason_gold          | string    |           8 |
|   141 | NULL                    | retweet_count                | string    |           9 |
|   141 | NULL                    | text                         | string    |          10 |
|   141 | NULL                    | tweet_coord                  | string    |          11 |
|   141 | NULL                    | tweet_created                | string    |          12 |
|   141 | NULL                    | tweet_location               | string    |          13 |
|   141 | NULL                    | user_timezone                | string    |          14 |
+-------+-------------------------+------------------------------+-----------+-------------+
  And here is the SQL command for retrieving the table comment we just added:  mysql> SELECT d.* FROM hive.TBLS t  JOIN hive.TABLE_PARAMS d  ON t.TBL_ID = d.TBL_ID  WHERE TBL_NAME = 'airline_tweets';
+--------+-----------------------+----------------+
| TBL_ID | PARAM_KEY             | PARAM_VALUE    |
+--------+-----------------------+----------------+
|      1 | comment               | Airline Tweets |
|      1 | last_modified_by      | dsun           |
|      1 | last_modified_time    | 1532533100     |
|      1 | numFiles              | 0              |
|      1 | numRows               | 14872          |
|      1 | rawDataSize           | 21569185       |
|      1 | totalSize             | 0              |
|      1 | transient_lastDdlTime | 1532533100     |
+--------+-----------------------+----------------+
8 rows in set (0.00 sec)
You can create your own SQL command if you need to retrieve the comments for a table and its columns at the same time; all the tables are there, and a minimal sketch of such a combined query follows below. Please 'Accept' the answer if you found it resolved your question.

Derek
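Here is that sketch. It assumes the same MySQL-backed Metastore schema shown above (database 'hive'; tables TBLS, DBS, SDS, COLUMNS_V2, TABLE_PARAMS) and the example table 'airline_tweets'; adjust the database and table names for your own environment.

-- Sketch: one row per column, with the table-level comment repeated next to each column comment.
-- Assumes the MySQL-backed Hive Metastore schema shown in the output above.
SELECT t.TBL_NAME,
       tp.PARAM_VALUE AS table_comment,
       c.COLUMN_NAME,
       c.COMMENT      AS column_comment
FROM hive.TBLS t
JOIN hive.DBS d        ON t.DB_ID = d.DB_ID
JOIN hive.SDS s        ON t.SD_ID = s.SD_ID
JOIN hive.COLUMNS_V2 c ON s.CD_ID = c.CD_ID
LEFT JOIN hive.TABLE_PARAMS tp
       ON tp.TBL_ID = t.TBL_ID AND tp.PARAM_KEY = 'comment'
WHERE d.NAME = 'default'
  AND t.TBL_NAME = 'airline_tweets'
ORDER BY c.INTEGER_IDX;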
						
					
07-24-2018 11:08 PM
Hi Michael,

For table comments, there is a Stack Overflow article about it. For column comments, you can simply run the Hive command 'DESCRIBE tablename;' and you should see a comment column in the results.

The easiest way, though, would be to use Apache Atlas: if you have Atlas installed, you should be able to see all the table/column metadata, including comments, in the Atlas UI. For instance, I added some comments to an existing Hive table called 'airline_tweets' and was then able to see the table and column comments in Atlas. Atlas is a great tool for enterprise data governance.

Hope that helps! If you found it resolved the issue, please "accept" the answer, thanks.
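As a quick addendum, here is a minimal sketch of the DESCRIBE route, assuming the same example table 'airline_tweets' (run from Beeline or the Hive CLI):

-- Column comments appear in the third column of the DESCRIBE output.
DESCRIBE airline_tweets;
-- The table-level comment appears under Table Parameters in the formatted output.
DESCRIBE FORMATTED airline_tweets;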
						
					
07-24-2018 09:49 PM
1 Kudo
Paul,

I ran into the same issue with CB 2.7.1. I'm sure there must be a better way to get it resolved, but here are the steps I used to first create an HDP 3.0 blueprint and then create an HDP 3.0 cluster with CB 2.7.1:

Step 1: Click 'Blueprints' in the left navigation pane, then click 'CREATE BLUEPRINT' and enter a name, for instance 'HDP 3.0 - Data Science: Apache Spark 2, Apache Zeppelin'.

Step 2: Paste the following JSON into the 'Text' field, then click 'CREATE':

{
  "Blueprints": {
    "blueprint_name": "hdp30-data-science-spark2-v4",
    "stack_name": "HDP",
    "stack_version": "3.0"
  },
  "settings": [
    {
      "recovery_settings": []
    },
    {
      "service_settings": [
        {
          "name": "HIVE",
          "credential_store_enabled": "false"
        }
      ]
    },
    {
      "component_settings": []
    }
  ],
  "configurations": [
    {
      "core-site": {
        "fs.trash.interval": "4320"
      }
    },
    {
      "hdfs-site": {
        "dfs.namenode.safemode.threshold-pct": "0.99"
      }
    },
    {
      "hive-site": {
        "hive.exec.compress.output": "true",
        "hive.merge.mapfiles": "true",
        "hive.server2.tez.initialize.default.sessions": "true",
        "hive.server2.transport.mode": "http"
      }
    },
    {
      "mapred-site": {
        "mapreduce.job.reduce.slowstart.completedmaps": "0.7",
        "mapreduce.map.output.compress": "true",
        "mapreduce.output.fileoutputformat.compress": "true"
      }
    },
    {
      "yarn-site": {
        "yarn.acl.enable": "true"
      }
    }
  ],
  "host_groups": [
    {
      "name": "master",
      "configurations": [],
      "components": [
        {
          "name": "APP_TIMELINE_SERVER"
        },
        {
          "name": "HDFS_CLIENT"
        },
        {
          "name": "HISTORYSERVER"
        },
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "HIVE_METASTORE"
        },
        {
          "name": "HIVE_SERVER"
        },
        {
          "name": "JOURNALNODE"
        },
        {
          "name": "MAPREDUCE2_CLIENT"
        },
        {
          "name": "METRICS_COLLECTOR"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NAMENODE"
        },
        {
          "name": "RESOURCEMANAGER"
        },
        {
          "name": "SECONDARY_NAMENODE"
        },
        {
          "name": "LIVY2_SERVER"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "YARN_CLIENT"
        },
        {
          "name": "ZEPPELIN_MASTER"
        },
        {
          "name": "ZOOKEEPER_CLIENT"
        },
        {
          "name": "ZOOKEEPER_SERVER"
        }
      ],
      "cardinality": "1"
    },
    {
      "name": "worker",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "DATANODE"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    },
    {
      "name": "compute",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    }
  ]
}

Step 3: You should now be able to see the newly added HDP 3.0 blueprint and create a cluster from it.

Hope it helps! If you found it resolved the issue, please "accept" the answer, thanks.
						
					
10-10-2017 06:13 PM
@vperiasamy I was experiencing a similar issue with my Atlas UI after this 'ABORTED' upgrade. I happened to try a different browser and the issue was resolved. I tried with Ranger as well, and it is indeed a browser issue. Thanks.
						
					
10-10-2017 03:43 PM
The Ranger (0.7.0) UI hangs while loading the policies pages, such as the HDFS policies. This started happening after a failed HDP upgrade from 2.6.1 to 2.6.2. Everything else inside Ranger works fine, such as audits and settings. I suspect the policy tables on the DB side somehow got corrupted. Any thoughts are appreciated. Thanks.
						
					
Labels: Apache Ranger
10-10-2017 02:28 AM
@Akhil S Naik Updating the state column from either 'CURRENT' or 'INSTALLED' to 'UPGRADING' causes Ambari to fail to start for me:

ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
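For context, the kind of update being discussed looks roughly like the sketch below. The table and column names here are assumptions based on a typical Ambari 2.x schema, not something confirmed in this thread; verify them against your own Ambari database and take a backup before changing anything.

-- Assumption: Ambari 2.x backing database, where host_version.state holds values
-- such as CURRENT, INSTALLED, and UPGRADING. Back up the Ambari database first.
UPDATE host_version
SET state = 'UPGRADING'
WHERE state IN ('CURRENT', 'INSTALLED');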
						
					