Member since: 04-13-2018
Posts: 53
Kudos Received: 2
Solutions: 0
05-14-2019
12:50 PM
Hi All, I want to change a database location in a Kerberized cluster. Please help me with how to do this. The current locations of the databases are /data/mib/extern/dbr.db and /data/mib/extern/dbe.db, and they need to move to /data/mib/in/<new path.db>. The databases contain tables (with data), and we have to change the database location without losing the data. Can anyone help with this?
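One common approach is to copy the data in HDFS first and then repoint the metastore locations. The sketch below is illustrative only, not a definitive procedure: the Kerberos principal, JDBC URL, and table name `table1` are placeholders, `ALTER DATABASE ... SET LOCATION` only exists in newer Hive releases (and only affects tables created afterwards), and this should be rehearsed on a non-production database first.

```shell
# 1. Obtain a Kerberos ticket as a user with HDFS and Hive privileges
#    (principal is a placeholder)
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)@EXAMPLE.COM

# 2. Copy (not move) the database directory, so the old data survives until verified
hdfs dfs -cp /data/mib/extern/dbr.db /data/mib/in/dbr.db

# 3. Repoint the database (newer Hive only) and each existing table in the metastore
beeline -u "jdbc:hive2://<hs2-host>:10000/;principal=hive/_HOST@EXAMPLE.COM" -e "
  ALTER DATABASE dbr SET LOCATION 'hdfs:///data/mib/in/dbr.db';
  ALTER TABLE dbr.table1 SET LOCATION 'hdfs:///data/mib/in/dbr.db/table1';  -- repeat per table
"

# 4. Verify, then remove the old copy
beeline -u "jdbc:hive2://<hs2-host>:10000/;principal=hive/_HOST@EXAMPLE.COM" \
  -e "DESCRIBE FORMATTED dbr.table1;"
```

On older Hive versions without `ALTER DATABASE ... SET LOCATION`, the per-table `ALTER TABLE ... SET LOCATION` statements are what actually matter for existing data.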
Tags: Hadoop Core, HDFS, Hive
Labels: Apache Hadoop, Apache Hive
04-15-2019
12:38 AM
Can anyone help with this? I checked the keytabs in /etc/security/keytabs on both working and non-working nodes. The keytabs are the same on both. What is the next step to resolve the issue?
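Identical keytab files do not by themselves prove the node can authenticate — the key version numbers (KVNOs) inside the keytab can still disagree with what the KDC currently holds. A verification sketch (keytab filename and principal below are examples, not taken from the post):

```shell
# List principals and KVNOs in the keytab; compare output between a working
# and a failing node
klist -kt /etc/security/keytabs/hive.service.keytab

# On a failing node, try to actually obtain a ticket with that keytab
kinit -kt /etc/security/keytabs/hive.service.keytab hive/$(hostname -f)@EXAMPLE.COM

# Confirm a ticket was issued
klist
```

If `kinit` fails here, the problem is the keytab/KDC relationship on that node rather than Beeline or Oozie.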
04-12-2019
01:30 PM
@Jay Kumar SenSharma
04-12-2019
01:27 PM
@Geoffrey Shelton Okot
04-12-2019
01:21 PM
Dears, Beeline commands are not getting executed from Oozie shell actions for some users. Beeline connects on some nodes but not on others. Getting the below error: "Error: Could not establish connection to jdbc:hive2://Hostname:10001/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: org.apache.http.client.ClientProtocolException (state=08S01,code=0)" Possible reason: the Oozie shell action expects Kerberos authentication while executing the beeline command. The keytab file is present on some nodes and may be missing on a few nodes; this is to be checked. On the production cluster, similar actions work fine.
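If that suspicion is right, one workaround is to authenticate inside the shell action before invoking Beeline, shipping the keytab with the action (via a `<file>` element in workflow.xml) so it exists on whichever node runs the task. A sketch — the keytab name, principal, and realm are placeholders; also note that on a secured cluster the JDBC URL normally needs a `principal=` component, which the URL in the error message lacks:

```shell
#!/bin/bash
# Oozie shell action script: get a Kerberos ticket first, then connect.
# hive.keytab is assumed to be shipped alongside the script via <file>.
kinit -kt hive.keytab someuser@EXAMPLE.COM

beeline -u "jdbc:hive2://Hostname:10001/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@EXAMPLE.COM" \
  -e "SHOW DATABASES;"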
Labels: Apache Oozie
01-24-2019
04:00 PM
Thanks for the nice explanation. Now I have a clear idea of NameNode High Availability.
01-23-2019
03:22 PM
Hi All, if a user is connected to the NameNode through a gateway node and the NameNode suddenly goes down, after a few seconds the standby NameNode becomes the active NameNode. Since the user can keep accessing the cluster without downtime after the failover, which IP address does he see when connecting to the NameNode?
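For context on what the client actually sees: with HA enabled, clients are configured with a logical nameservice ID rather than the IP of either NameNode, and the HDFS client library resolves which NameNode is active on each call. So the URI the user sees does not change across a failover. This can be checked with commands like the following (the nameservice name `mycluster` and NameNode ID `nn1` are examples — substitute the values from your hdfs-site.xml):

```shell
# The logical nameservice the clients are configured with
hdfs getconf -confKey dfs.nameservices

# The NameNode IDs behind that nameservice
hdfs getconf -confKey dfs.ha.namenodes.mycluster

# Which one is currently active
hdfs haadmin -getServiceState nn1

# Clients address the logical URI, which keeps working across failover
hdfs dfs -ls hdfs://mycluster/user
```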
Labels: Apache Hadoop
01-09-2019
10:01 PM
The above steps worked fine, but I am getting the below error: Connection failed to http://nn.hadoop.com:6080/login.jsp (Execution of 'curl --location-trusted -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/db72a0e6-ec32-46b8-b2c9-174e417b2c4e -c /var/lib/ambari-agent/tmp/cookies/db72a0e6-ec32-46b8-b2c9-174e417b2c4e -w '%{http_code}' http://nn.hadoop.com:6080/login.jsp --connect-timeout 5 --max-time 7 -o /dev/null 1>/tmp/tmpMOa5kZ 2>/tmp/tmpxYQlBP' returned 7. curl: (7) couldn't connect to host
000)
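curl exit code 7 means the TCP connection itself was refused — nothing is listening on port 6080 — which is consistent with Ranger Admin not having started. A quick check (the hostname is taken from the error message; the log path is Ranger's usual default and may differ on your install):

```shell
# Is anything listening on the Ranger Admin port?
ss -tlnp | grep 6080

# Reproduce the probe by hand to see the failure directly
curl -v --connect-timeout 5 http://nn.hadoop.com:6080/login.jsp

# Inspect the Ranger Admin startup log for the real cause
tail -n 100 /var/log/ranger/admin/xa_portal.log
```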
01-09-2019
09:20 PM
Installed a single-node cluster on CentOS 6 and enabled Kerberos on the cluster. Ranger is not starting. stderr: /var/lib/ambari-agent/data/errors-311.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 231, in <module>
RangerAdmin().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 93, in start
self.configure(env, upgrade_type=upgrade_type, setup_db=params.stack_supports_ranger_setup_db_on_start)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 135, in configure
setup_ranger_db()
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 265, in setup_ranger_db
user=params.unix_user,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-python-wrap /usr/hdp/current/ranger-admin/dba_script.py -q' returned 1. 2019-01-09 10:12:14,831 [I] Running DBA setup script. QuiteMode:True
2019-01-09 10:12:14,832 [I] Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
2019-01-09 10:12:14,832 [I] DB FLAVOR:MYSQL
2019-01-09 10:12:14,832 [I] DB Host:nn.hadoop.com
2019-01-09 10:12:14,832 [I] ---------- Verifying DB root password ----------
2019-01-09 10:12:14,833 [I] DBA root user password validated
2019-01-09 10:12:14,833 [I] ---------- Verifying Ranger Admin db user password ----------
2019-01-09 10:12:14,833 [I] admin user password validated
2019-01-09 10:12:14,834 [I] ---------- Creating Ranger Admin db user ----------
2019-01-09 10:12:14,834 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/2.6.5.1050-37/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://nn.hadoop.com/mysql -u ranger -p '********' -noheader -trim -c \; -query "SELECT version();"
SQLException : SQL state: 42000 com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Access denied for user 'ranger'@'%' to database 'mysql' ErrorCode: 1044
2019-01-09 10:12:16,281 [E] Can't establish db connection.. Exiting.. stdout: /var/lib/ambari-agent/data/output-311.txt 2019-01-09 10:12:13,000 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1050-37 -> 2.6.5.1050-37
2019-01-09 10:12:13,016 - Using hadoop conf dir: /usr/hdp/2.6.5.1050-37/hadoop/conf
2019-01-09 10:12:13,218 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1050-37 -> 2.6.5.1050-37
2019-01-09 10:12:13,218 - Using hadoop conf dir: /usr/hdp/2.6.5.1050-37/hadoop/conf
2019-01-09 10:12:13,219 - Group['ranger'] {}
2019-01-09 10:12:13,233 - Group['hdfs'] {}
2019-01-09 10:12:13,234 - Group['hadoop'] {}
2019-01-09 10:12:13,234 - Group['users'] {}
2019-01-09 10:12:13,234 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-09 10:12:13,235 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-09 10:12:13,236 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2019-01-09 10:12:13,236 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger'], 'uid': None}
2019-01-09 10:12:13,237 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-01-09 10:12:13,237 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-09 10:12:13,238 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-09 10:12:13,239 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-09 10:12:13,266 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-09 10:12:13,287 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-09 10:12:13,288 - Group['hdfs'] {}
2019-01-09 10:12:13,289 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2019-01-09 10:12:13,289 - FS Type:
2019-01-09 10:12:13,290 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-09 10:12:13,303 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2019-01-09 10:12:13,329 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-09 10:12:13,348 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-01-09 10:12:13,469 - Skipping Execute[('setenforce', '0')] due to not_if
2019-01-09 10:12:13,469 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-01-09 10:12:13,535 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-01-09 10:12:13,535 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-01-09 10:12:13,563 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2019-01-09 10:12:13,576 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2019-01-09 10:12:13,583 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-01-09 10:12:13,606 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-01-09 10:12:13,607 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-01-09 10:12:13,622 - File['/usr/hdp/2.6.5.1050-37/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-01-09 10:12:13,655 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-01-09 10:12:13,678 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-01-09 10:12:13,682 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
2019-01-09 10:12:14,007 - The unlimited key JCE policy is required, and appears to have been installed.
2019-01-09 10:12:14,397 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1050-37 -> 2.6.5.1050-37
2019-01-09 10:12:14,402 - File['/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar'] {'action': ['delete']}
2019-01-09 10:12:14,403 - Deleting File['/usr/hdp/current/ranger-admin/ews/lib/mysql-connector-java.jar']
2019-01-09 10:12:14,466 - File['/var/lib/ambari-agent/tmp/mysql-connector-java.jar'] {'content': DownloadSource('http://nn.hadoop.com:8080/resources/mysql-connector-java.jar'), 'mode': 0644}
2019-01-09 10:12:14,466 - Not downloading the file from http://nn.hadoop.com:8080/resources/mysql-connector-java.jar, because /var/lib/ambari-agent/tmp/mysql-connector-java.jar already exists
2019-01-09 10:12:14,498 - Execute[('cp', '--remove-destination', '/var/lib/ambari-agent/tmp/mysql-connector-java.jar', '/usr/hdp/2.6.5.1050-37/ranger-admin/ews/lib')] {'path': ['/bin', '/usr/bin/'], 'sudo': True}
2019-01-09 10:12:14,520 - File['/usr/hdp/2.6.5.1050-37/ranger-admin/ews/lib/mysql-connector-java.jar'] {'mode': 0644}
2019-01-09 10:12:14,521 - ModifyPropertiesFile['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': ...}
2019-01-09 10:12:14,565 - Modifying existing properties file: /usr/hdp/2.6.5.1050-37/ranger-admin/install.properties
2019-01-09 10:12:14,576 - File['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2019-01-09 10:12:14,577 - Writing File['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] because contents don't match
2019-01-09 10:12:14,577 - ModifyPropertiesFile['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'SQL_CONNECTOR_JAR': '/usr/hdp/2.6.5.1050-37/ranger-admin/ews/lib/mysql-connector-java.jar'}}
2019-01-09 10:12:14,578 - Modifying existing properties file: /usr/hdp/2.6.5.1050-37/ranger-admin/install.properties
2019-01-09 10:12:14,579 - File['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2019-01-09 10:12:14,579 - Writing File['/usr/hdp/2.6.5.1050-37/ranger-admin/install.properties'] because contents don't match
2019-01-09 10:12:14,581 - ModifyPropertiesFile['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'properties': {'audit_store': 'solr'}}
2019-01-09 10:12:14,581 - Modifying existing properties file: /usr/hdp/current/ranger-admin/install.properties
2019-01-09 10:12:14,582 - File['/usr/hdp/current/ranger-admin/install.properties'] {'owner': 'ranger', 'content': ..., 'group': None, 'mode': None, 'encoding': 'utf-8'}
2019-01-09 10:12:14,582 - Setting up Ranger DB and DB User
2019-01-09 10:12:14,582 - Execute['ambari-python-wrap /usr/hdp/current/ranger-admin/dba_script.py -q'] {'logoutput': True, 'environment': {'RANGER_ADMIN_HOME': '/usr/hdp/current/ranger-admin', 'JAVA_HOME': '/usr/jdk64/jdk1.8.0_112'}, 'user': 'ranger'}
2019-01-09 10:12:14,831 [I] Running DBA setup script. QuiteMode:True
2019-01-09 10:12:14,832 [I] Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
2019-01-09 10:12:14,832 [I] DB FLAVOR:MYSQL
2019-01-09 10:12:14,832 [I] DB Host:nn.hadoop.com
2019-01-09 10:12:14,832 [I] ---------- Verifying DB root password ----------
2019-01-09 10:12:14,833 [I] DBA root user password validated
2019-01-09 10:12:14,833 [I] ---------- Verifying Ranger Admin db user password ----------
2019-01-09 10:12:14,833 [I] admin user password validated
2019-01-09 10:12:14,834 [I] ---------- Creating Ranger Admin db user ----------
2019-01-09 10:12:14,834 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/2.6.5.1050-37/ranger-admin/ews/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-admin/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://nn.hadoop.com/mysql -u ranger -p '********' -noheader -trim -c \; -query "SELECT version();"
SQLException : SQL state: 42000 com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Access denied for user 'ranger'@'%' to database 'mysql' ErrorCode: 1044
2019-01-09 10:12:16,281 [E] Can't establish db connection.. Exiting..
Command failed after 1 tries
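The root cause is visible in the log: MySQL refuses the `ranger` user access to the `mysql` system database that the JISQL connectivity check connects to (`Access denied for user 'ranger'@'%' to database 'mysql'`, error 1044). A typical fix is to grant the Ranger database user sufficient privileges as the MySQL root user. The password below is a placeholder, and `ALL PRIVILEGES ON *.*` is deliberately broad for simplicity — scope it down for production:

```shell
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%' IDENTIFIED BY 'ranger_password';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'nn.hadoop.com' IDENTIFIED BY 'ranger_password';
FLUSH PRIVILEGES;
SQL
```

After the grants, retry the Ranger Admin start from Ambari so the `dba_script.py -q` step runs again.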
01-08-2019
06:09 PM
1 Kudo
error-ambari.jpg Please help — I am getting an error during ambari-server installation. stderr: /var/lib/ambari-agent/data/errors-99.txt Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py", line 53, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/yumrpm.py", line 264, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py", line 266, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/package/__init__.py", line 283, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.6-GPL-repo-1. Please verify its path and try again stdout: /var/lib/ambari-agent/data/output-99.txt 2019-01-08 09:37:48,171 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2019-01-08 09:37:48,172 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2019-01-08 09:37:48,173 - Group['hdfs'] {}
2019-01-08 09:37:48,175 - Group['hadoop'] {}
2019-01-08 09:37:48,176 - Group['users'] {}
2019-01-08 09:37:48,176 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-08 09:37:48,177 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-08 09:37:48,177 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2019-01-08 09:37:48,178 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-01-08 09:37:48,178 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-08 09:37:48,179 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-01-08 09:37:48,179 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-01-08 09:37:48,180 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-01-08 09:37:48,184 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-01-08 09:37:48,184 - Group['hdfs'] {}
2019-01-08 09:37:48,185 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2019-01-08 09:37:48,185 - FS Type:
2019-01-08 09:37:48,185 - Directory['/etc/hadoop'] {'mode': 0755}
2019-01-08 09:37:48,185 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-01-08 09:37:48,202 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.1050', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-01-08 09:37:48,209 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': InlineTemplate(...)}
2019-01-08 09:37:48,210 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because it doesn't exist
2019-01-08 09:37:48,211 - Repository['HDP-2.6-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos6/2.x/updates/2.6.5.1050', 'action': ['create'], 'components': ['HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-01-08 09:37:48,213 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.1050\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos6/2.x/updates/2.6.5.1050\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-01-08 09:37:48,213 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-01-08 09:37:48,213 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-01-08 09:37:48,216 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.1050\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-2.6-GPL-repo-1]\nname=HDP-2.6-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos6/2.x/updates/2.6.5.1050\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-01-08 09:37:48,216 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-01-08 09:37:48,217 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-08 09:37:48,351 - Skipping installation of existing package unzip
2019-01-08 09:37:48,352 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-08 09:37:48,434 - Skipping installation of existing package curl
2019-01-08 09:37:48,435 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-01-08 09:37:48,518 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2019-01-08 09:37:48,926 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: HDP-2.6-GPL-repo-1. Please verify its path and try again
2019-01-08 09:37:48,927 - Failed to install package hdp-select. Executing '/usr/bin/yum clean metadata'
2019-01-08 09:37:49,100 - Retrying to install package hdp-select after 30 seconds
2019-01-08 09:38:20,972 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
Command failed after 1 tries
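The failure is yum being unable to fetch repomd.xml for HDP-2.6-GPL-repo-1, which usually means the host cannot reach the repository URL (DNS, proxy, or no internet access from the node). A quick diagnosis — the URL is taken from the generated repo file shown in the log above:

```shell
# Can this host fetch the repo metadata at all?
curl -I http://public-repo-1.hortonworks.com/HDP-GPL/centos6/2.x/updates/2.6.5.1050/repodata/repomd.xml

# Clear cached metadata and re-check that all repos resolve
yum clean all
yum repolist enabled    # HDP-2.6-GPL-repo-1 should list without errors

# Then retry the failing install
yum -y install hdp-select
```

If the host has no internet access, the usual remedy is a local mirror of the HDP/HDP-GPL/HDP-UTILS repositories with the base URLs updated in Ambari.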
11-02-2018
09:16 PM
1 Kudo
Cleared the certification, thanks!
08-02-2018
06:54 PM
Thanks a lot @amarnath reddy pappu. I searched for *.lock files in /etc but did not find any file named .lock. Please help me.
08-02-2018
03:58 PM
@Geoffrey Shelton Okot Hello Sir, any update?
08-02-2018
02:52 PM
Can anyone resolve this issue ASAP? stderr: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 35, in
BeforeAnyHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 29, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 51, in setup_users
fetch_nonlocal_groups = params.fetch_nonlocal_groups,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/accounts.py", line 90, in action_create
shell.checked_call(command, sudo=True)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'useradd -m -G hadoop -g hadoop yarn-ats' returned 1. useradd: existing lock file /etc/passwd.lock without a PID
useradd: cannot lock /etc/passwd; try again later.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-409.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-409.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
stdout:
2018-08-02 10:46:52,940 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2018-08-02 10:46:52,949 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-02 10:46:52,950 - Group['hdfs'] {}
2018-08-02 10:46:52,951 - Group['hadoop'] {}
2018-08-02 10:46:52,951 - Group['users'] {}
2018-08-02 10:46:52,952 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-02 10:46:52,952 - Adding user User['yarn-ats']
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-409.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-409.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2018-08-02 10:46:52,991 - The repository with version 3.0.0.0-1634 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
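The useradd error itself names the cause: a stale `/etc/passwd.lock` left behind by an interrupted useradd/usermod run ("existing lock file /etc/passwd.lock without a PID"). Provided no useradd/usermod process is actually running at the time, removing the stale lock files as root typically clears this — verify first, as the sketch below assumes:

```shell
# Confirm nothing is currently modifying the account databases
ps -ef | grep -E 'useradd|usermod|userdel' | grep -v grep

# List and remove the stale lock files (shadow/group locks can linger too)
ls -l /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock 2>/dev/null
rm -f /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock

# Retry the command Ambari was running
useradd -m -G hadoop -g hadoop yarn-ats
```

Once `useradd` succeeds manually, retry the failed operation from Ambari.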
Tags: Hadoop Core, hdpca
08-01-2018
07:17 PM
Did you find any solution?
08-01-2018
06:04 PM
Please check all attachments
08-01-2018
05:51 PM
Please check below
08-01-2018
05:45 PM
Fresh Installation on Centos 7
08-01-2018
05:43 PM
Thanks for your reply! Yesterday I installed Ambari 2.7 and HDP 3.0 on CentOS 7. The cluster size is 5 nodes (2 master nodes & 3 slaves). Today I started to add the MR+YARN service and am getting the error "Unable to run the custom hook script". Until now I have not added any service.
08-01-2018
04:29 PM
@Artem Ervits @Jay Kumar SenSharma @Geoffrey Shelton Okot
stderr: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 35, in
BeforeAnyHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 29, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 51, in setup_users
fetch_nonlocal_groups = params.fetch_nonlocal_groups,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/accounts.py", line 90, in action_create
shell.checked_call(command, sudo=True)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'useradd -m -G hadoop -g hadoop yarn-ats' returned 1. useradd: existing lock file /etc/passwd.lock without a PID
useradd: cannot lock /etc/passwd; try again later.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-198.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-198.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
stdout:
2018-08-01 10:49:04,734 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2018-08-01 10:49:04,760 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-01 10:49:04,762 - Group['hdfs'] {}
2018-08-01 10:49:04,764 - Group['hadoop'] {}
2018-08-01 10:49:04,765 - Group['users'] {}
2018-08-01 10:49:04,765 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 10:49:04,766 - Adding user User['yarn-ats']
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-198.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-198.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2018-08-01 10:49:04,811 - The repository with version 3.0.0.0-1634 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
08-01-2018
02:55 PM
@Jay Kumar SenSharma @Geoffrey Shelton Okot I installed the latest versions, HDP 3.0 and Ambari 2.7. Please help with this error. stderr: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 35, in
BeforeAnyHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py", line 29, in hook
setup_users()
File "/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/shared_initialization.py", line 51, in setup_users
fetch_nonlocal_groups = params.fetch_nonlocal_groups,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/accounts.py", line 90, in action_create
shell.checked_call(command, sudo=True)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'useradd -m -G hadoop -g hadoop yarn-ats' returned 1. useradd: existing lock file /etc/passwd.lock without a PID
useradd: cannot lock /etc/passwd; try again later.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-198.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-198.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
stdout:
2018-08-01 10:49:04,734 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2018-08-01 10:49:04,760 - Using hadoop conf dir: /usr/hdp/3.0.0.0-1634/hadoop/conf
2018-08-01 10:49:04,762 - Group['hdfs'] {}
2018-08-01 10:49:04,764 - Group['hadoop'] {}
2018-08-01 10:49:04,765 - Group['users'] {}
2018-08-01 10:49:04,765 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-08-01 10:49:04,766 - Adding user User['yarn-ats']
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-198.json', '/var/lib/ambari-agent/cache/stack-hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-198.json', 'INFO', '/var/lib/ambari-agent/tmp', 'PROTOCOL_TLSv1_2', '']
2018-08-01 10:49:04,811 - The repository with version 3.0.0.0-1634 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
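The `useradd: existing lock file /etc/passwd.lock without a PID` message usually means an earlier `useradd`/`usermod` run on that node was interrupted and left stale shadow-utils lock files behind. A minimal cleanup sketch, assuming no user-management command is still running (the `ETC_DIR` variable is only there so the script can be dry-run against a scratch directory; on the real node it defaults to `/etc` and must be run as root):

```shell
#!/bin/sh
# Cleanup sketch for stale shadow-utils lock files left by an interrupted useradd.
# ETC_DIR is parameterized for dry-runs; defaults to the real /etc.
ETC_DIR="${ETC_DIR:-/etc}"

# Safety check: never remove the locks while a user-management tool is running.
if pgrep -x useradd >/dev/null 2>&1 || pgrep -x usermod >/dev/null 2>&1; then
    echo "user-management tool still running; not removing locks" >&2
    exit 1
fi

# Remove the standard shadow-utils lock files (rm -f is a no-op if absent).
rm -f "$ETC_DIR/passwd.lock" "$ETC_DIR/shadow.lock" \
      "$ETC_DIR/group.lock"  "$ETC_DIR/gshadow.lock"
echo "stale locks removed (if any)"
```

After the locks are gone, retrying the failed Ambari operation should let the hook's `useradd -m -G hadoop -g hadoop yarn-ats` succeed.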
... View more
- Tags:
- Hadoop Core
- hdpca
07-30-2018
11:05 AM
Can anyone share a link to a practice exam for the HDPCA certification?
... View more
- Tags:
- Hadoop Core
- hdpca
07-05-2018
04:44 PM
05-14-2018
08:56 PM
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
Logging initialized using configuration in file:/etc/hive/2.6.4.0-91/0/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.hiveJars":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4181)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1109)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:645)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.hiveJars":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4181)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1109)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:645)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3089)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3057)
at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1181)
at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1177)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1195)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1169)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1924)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getDefaultDestDir(DagUtils.java:792)
at org.apache.hadoop.hive.ql.exec.tez.DagUtils.getHiveJarDirectory(DagUtils.java:897)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createJarLocalResource(TezSessionState.java:367)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:161)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:579)
... 8 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user/root/.hiveJars":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:325)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:246)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1950)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1934)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1917)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4181)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1109)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:645)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1554)
at org.apache.hadoop.ipc.Client.call(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:610)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:291)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:203)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:185)
at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3087)
... 21 more
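The key line is `Permission denied: user=root, access=WRITE, inode="/user/root/.hiveJars":hdfs:hdfs:drwxr-xr-x`: the Hive CLI was started as `root`, and Tez tries to upload its jars under `/user/root`, which root cannot create in HDFS. Either run Hive as a regular cluster user, or create root's HDFS home first. A sketch of the second option, assuming `hdfs` is the HDFS superuser (the default in an HDP install):

```shell
# Create root's HDFS home so Hive/Tez can write /user/root/.hiveJars,
# then hand ownership to root (run on a node with the HDFS client installed).
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:hdfs /user/root
```

On a kerberized cluster you would first `kinit` with the hdfs service keytab instead of using `sudo -u hdfs`.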
... View more
- Tags:
- Data Processing
- Hive
Labels:
- Labels:
-
Apache Hive