Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 904 | 06-04-2025 11:36 PM |
|  | 1505 | 03-23-2025 05:23 AM |
|  | 743 | 03-17-2025 10:18 AM |
|  | 2679 | 03-05-2025 01:34 PM |
|  | 1782 | 03-03-2025 01:09 PM |
05-24-2018
01:55 PM
@JAy PaTel If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
05-24-2018
01:54 PM
@Mokkan Mok Hive jobs need a temporary working directory in HDFS where the Hive execution JARs used during execution are temporarily stored; these live under /user/hive/.hiveJars. See the example below:
$ hdfs dfs -ls /user/hive
Found 3 items
drwx------ - hive hdfs 0 2018-04-25 20:00 /user/hive/.Trash
drwxr-xr-x - hive hdfs 0 2018-04-13 23:04 /user/hive/.hiveJars
-rw-r--r-- 3 hive hdfs 642 2018-05-24 08:43 /user/hive/derby.log
$ hdfs dfs -ls /user/hive/.hiveJars
Found 1 items
-rw-r--r-- 3 hive hdfs 22006904 2018-04-13 23:04 /user/hive/.hiveJars/hive-exec-1.2.1000.2.6.2.0-205-79292bba9a3e076ad6d7a33c604b892fa0d45f6f60ae07a507e5e659a297f665.jar
Hope that helps explain it.
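As a side note, each Hive build stages its own hive-exec jar in this directory, so it can grow across upgrades. A quick way to check what is there, assuming the default /user/hive location:
$ hdfs dfs -du -h /user/hive/.hiveJars
$ hdfs dfs -count /user/hive/.hiveJars
Jars belonging to Hive versions that are no longer installed can generally be cleaned up by the hive user once no running jobs reference them.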
05-24-2018
12:18 PM
1 Kudo
@JAy PaTel The above code snippet is for creating the user's home directory in HDFS. You will also need to create the local Linux user. To do so as root, run the following; when setting the password you will be prompted twice:
# useradd tempuser
# passwd tempuser
If you are using sudo privileges instead:
$ sudo useradd tempuser
$ sudo passwd tempuser
This will also create a user home on the local Linux box in /home/tempuser, which is different from the HDFS user home /user/tempuser; the latter MUST exist if tempuser is to run Hive queries etc. Hope that helps
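For completeness, the HDFS side referred to above usually amounts to creating the home directory as the hdfs superuser and handing it over to the new user; a minimal sketch, assuming the user is called tempuser and that hdfs is an appropriate group for your environment:
$ sudo -u hdfs hdfs dfs -mkdir -p /user/tempuser
$ sudo -u hdfs hdfs dfs -chown tempuser:hdfs /user/tempuser
$ sudo -u hdfs hdfs dfs -chmod 755 /user/tempuser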
05-24-2018
07:09 AM
@RAUI To copy files between HDFS directories you need the correct permissions, i.e. in your example moving /apps/pqr/abc.txt to /apps/lmn/abc.txt. I assume the HDFS directory owners are pqr and lmn respectively, where the former has to have write permission on /apps/lmn/; otherwise run the copy command as the HDFS superuser hdfs and then change the ownership as demonstrated below. Switch to the hdfs user:
# su - hdfs
Now copy abc.txt from source to destination:
$ hdfs dfs -cp /apps/pqr/abc.txt /apps/lmn/
Check the permissions; see the example:
$ hdfs dfs -ls /apps/lmn
Found 3 items
drwxr-xr-x+ - lmn hdfs 0 2018-05-24 00:40 /apps/lmn/acls
-rw-r--r-- 3 hdfs hdfs 0 2018-05-24 00:40 /apps/lmn/abc.txt
-rw-r--r-- 3 lmn hdfs 642 2018-05-24 08:45 /apps/lmn/derby.log
Change the ownership recursively on the directory; this also changes the owner of abc.txt:
$ hdfs dfs -chown -R lmn /apps/lmn
I hope that helps
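Since the question mentions a move rather than a copy, the same result with the file removed from the source directory can be achieved in one step; a minimal sketch, again run as the hdfs superuser and followed by the same ownership fix:
$ hdfs dfs -mv /apps/pqr/abc.txt /apps/lmn/
$ hdfs dfs -chown lmn /apps/lmn/abc.txt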
05-23-2018
03:23 PM
1 Kudo
@Erkan ŞİRİN Please ensure Atlas, ZooKeeper and HBase are up and running and that the ATLAS_ENTITY_AUDIT_EVENTS table exists. Check the entries in ZooKeeper; see my entries:
./bin/zkCli.sh
Connecting to localhost:2181
Welcome to ZooKeeper!
2018-05-23 16:45:58,797 - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1019] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
......
.......
[zk: localhost:2181(CONNECTED) 0] ls /
[registry, cluster, brokers, storm, zookeeper, infra-solr, hbase-unsecure, admin, isr_change_notification, templeton-hadoop, hiveserver2, controller_epoch, druid, rmstore, ambari-metrics-cluster, consumers, config]
[zk: localhost:2181(CONNECTED) 1] ls /hbase-unsecure/table
[ATLAS_ENTITY_AUDIT_EVENTS, hbase:meta, hbase:namespace, atlas_titan, hbase:acl]
[zk: localhost:2181(CONNECTED) 2]
From your HBase shell:
hbase(main):001:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
2 row(s) in 18.5760 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan"]
hbase(main):002:0>
Stop Atlas via Ambari. In the hbase shell, disable the table by running this command:
disable 'atlas_titan'
Then, in the hbase shell, drop the table by running this command:
drop 'atlas_titan'
Start Atlas via Ambari. The above steps can be repeated for the 'ATLAS_ENTITY_AUDIT_EVENTS' table if there is a requirement to wipe out the audit data as well. These steps should reset Atlas and start it as if it were a fresh installation. Hope that helps
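Put together, the hbase shell sequence (including the audit table, if you also want to wipe the audit data) would look roughly like the sketch below; only run it while Atlas is stopped, since it deletes all Atlas metadata:
hbase(main):003:0> disable 'atlas_titan'
hbase(main):004:0> drop 'atlas_titan'
hbase(main):005:0> disable 'ATLAS_ENTITY_AUDIT_EVENTS'
hbase(main):006:0> drop 'ATLAS_ENTITY_AUDIT_EVENTS'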
05-23-2018
01:57 PM
@Matthias Tewordt Hey, what is the latest error? Can you share the stack trace?
05-23-2018
01:56 PM
@RAUI Could you please give a concrete example of what you intend to do? It is hard to conceptualize from your current explanation.
05-23-2018
07:20 AM
@Matthias Tewordt Can you back up the file below:
cp /usr/hdf/current/registry/bootstrap/bootstrap-storage.sh /usr/hdf/current/registry/bootstrap/bootstrap-storage.sh.bak
Then edit /usr/hdf/current/registry/bootstrap/bootstrap-storage.sh and update the following lines with your proxy information by adding values for -Dhttps.proxyHost=<YOUR_PROXY_HOST> and -Dhttps.proxyPort=<YOUR_PROXY_PORT>. Example:
function dropTables {
${JAVA} -Dbootstrap.dir=$BOOTSTRAP_DIR -Dhttps.proxyHost=<YOUR_PROXY_HOST> -Dhttps.proxyPort=<YOUR_PROXY_PORT> -cp ${CLASSPATH} ${TABLE_INITIALIZER_MAIN_CLASS} -m ${MYSQL_JAR_URL_PATH} -c ${CONFIG_FILE_PATH} -s ${SCRIPT_ROOT_DIR} --drop
}
function createTables {
${JAVA} -Dbootstrap.dir=$BOOTSTRAP_DIR -Dhttps.proxyHost=<YOUR_PROXY_HOST> -Dhttps.proxyPort=<YOUR_PROXY_PORT> -cp ${CLASSPATH} ${TABLE_INITIALIZER_MAIN_CLASS} -m ${MYSQL_JAR_URL_PATH} -c ${CONFIG_FILE_PATH} -s ${SCRIPT_ROOT_DIR} --create
}
function checkStorageConnection {
${JAVA} -Dbootstrap.dir=$BOOTSTRAP_DIR -Dhttps.proxyHost=<YOUR_PROXY_HOST> -Dhttps.proxyPort=<YOUR_PROXY_PORT> -cp ${CLASSPATH} ${TABLE_INITIALIZER_MAIN_CLASS} -m ${MYSQL_JAR_URL_PATH} -c ${CONFIG_FILE_PATH} -s ${SCRIPT_ROOT_DIR} --check-connection
}
Then try restarting the Registry and SAM.
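Before you restart, it may also be worth confirming the proxy is reachable from the Registry host; a quick sanity check, assuming curl is available and substituting your own proxy values (the Maven Central URL here is just an arbitrary HTTPS endpoint for the test):
$ curl -x http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT> -I https://repo1.maven.org/maven2/
If that fails, the bootstrap script will most likely be unable to reach the ${MYSQL_JAR_URL_PATH} download either.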