Member since: 12-21-2015
Posts: 57
Kudos Received: 7
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 3964 | 08-25-2016 09:31 AM |
12-01-2016 05:29 AM
Is there a single command that displays all installed components / services in an HDP stack installation plus their respective versions (not the HDP version, but the corresponding Apache version)? I know this info is in the release notes on the Hortonworks website, but I always get lost looking for the specific link, and I can't find it in Ambari either.
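For reference, the closest workaround I know of (not an official single command) is to query the installed HDP packages, since the package versions embed the corresponding Apache version; a sketch, assuming an RPM-based OS such as CentOS/RHEL:
# List installed stack packages; the version string carries the Apache version,
# e.g. hadoop_2_4_2_0_258-2.7.1.2.4.2.0-258 would correspond to Apache Hadoop 2.7.1
rpm -qa | grep -iE 'hadoop|hive|hbase|pig|sqoop|oozie|zookeeper' | sort
# hdp-select shows which HDP build each component points at (stack version, not Apache version)
hdp-select status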
Labels:
- Hortonworks Data Platform (HDP)
11-22-2016 10:27 AM
I have set a Ranger policy enabling a certain newuser to read/write/execute only on his own home directory in HDFS, say /user/<newuser>. The policy works on his own path; however, I do not want newuser to be able to read directories and files outside his own, which still happens when I run: hadoop fs -ls / or when listing some other directories. The same thing happens when newuser is logged in to Hue. How do I do this in Ranger?
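For background, when no Ranger policy matches a path, the HDFS plugin usually falls back to the native POSIX permissions (controlled by xasecure.add-hadoop-authorization in the plugin config), so one sketch of the tightening I have in mind, with placeholder paths and users, would be:
# As the hdfs superuser, close down directories that newuser should not read;
# the fallback POSIX check then denies access when no Ranger policy allows it
sudo -u hdfs hadoop fs -chmod 700 /user/otheruser
sudo -u hdfs hadoop fs -chmod 750 /apps/hive/warehouse
# Verify what newuser can still see
sudo -u newuser hadoop fs -ls /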
Labels:
- Apache Ranger
11-18-2016 03:08 AM
Here's what I have for /etc/atlas/conf/application.properties:
atlas.enableTLS=false
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.directory=/var/lib/atlas/data/es
atlas.graph.index.search.elasticsearch.client-only=false
atlas.graph.index.search.elasticsearch.local-mode=true
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley
atlas.lineage.hive.process.inputs.name=inputs
atlas.lineage.hive.process.outputs.name=outputs
atlas.lineage.hive.process.type.name=Process
atlas.lineage.hive.table.schema.query.hive_table=hive_table where name='%s'\, columns
atlas.lineage.hive.table.schema.query.Table=Table where name='%s'\, columns
atlas.lineage.hive.table.type.name=DataSet
atlas.notification.embedded=false
atlas.rest.address=http://<host>:21000
atlas.server.address.id1=<host>:21000
atlas.server.bind.address=<host>
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
Unfortunately, I can't see an existing /var/lib/atlas directory. Should that path be created automatically by the Ambari / HDP installation wizard?
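In case it isn't created automatically, here is a minimal sketch of creating the paths by hand, assuming the service account is atlas in group hadoop:
# Create the BerkeleyJE and Elasticsearch data directories referenced above
sudo mkdir -p /var/lib/atlas/data/berkeley /var/lib/atlas/data/es
# Make them writable by the Atlas service account (user/group are assumptions)
sudo chown -R atlas:hadoop /var/lib/atlas
# Then restart the Atlas Metadata Server from Ambari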
11-17-2016 08:01 AM
Atlas is running, but there is a yellow warning sign that says "HTTP 503 response from Metadata Server Web UI". When I check /var/log/atlas/application.log, I get:
2016-11-17 15:18:42,753 INFO - [main:] ~ Server starting with TLS ? false on port 21000 (Main:153)
2016-11-17 15:18:42,756 INFO - [main:] ~ <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< (Main:154)
2016-11-17 15:18:42,786 INFO - [main:] ~ Logging initialized @446ms (log:186)
2016-11-17 15:18:42,888 INFO - [main:] ~ jetty-9.2.12.v20150709 (Server:327)
2016-11-17 15:18:43,703 INFO - [main:] ~ Loading Guice modules (GuiceServletConfig:59)
2016-11-17 15:18:44,050 INFO - [main:] ~ Logged in user atlas (auth:SIMPLE) (LoginProcessor:80)
2016-11-17 15:18:44,343 INFO - [main:] ~ Jersey loading from packages: org.apache.atlas.web.resources,org.apache.atlas.web.params (GuiceServletConfig:84)
2016-11-17 15:18:44,773 WARN - [main:] ~ Failed startup of context o.e.j.w.WebAppContext@2fb860a{/,file:/usr/hdp/2.4.2.0-258/atlas/server/webapp/atlas/,STARTING}{/usr/hdp/current/atlas-server/server/webapp/atlas} (WebAppContext:514)
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Error in custom provider, java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager
at org.apache.atlas.RepositoryMetadataModule.configure(RepositoryMetadataModule.java:50)
at org.apache.atlas.RepositoryMetadataModule.configure(RepositoryMetadataModule.java:50)
while locating com.google.inject.throwingproviders.ThrowingProviderBinder$Result annotated with @com.google.inject.internal.UniqueAnnotations$Internal(value=1)
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:361)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1275)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:93)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:73)
at org.apache.atlas.repository.graph.TitanGraphProvider.getGraphInstance(TitanGraphProvider.java:111)
at org.apache.atlas.repository.graph.TitanGraphProvider.get(TitanGraphProvider.java:142)
How do I resolve this? I am using Ambari v2.4.0.1 and HDP v2.4.2.
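My guess so far is that the BerkeleyJE storage directory configured in application.properties is missing or not writable by the atlas user; a quick check I plan to run (path and user are assumptions):
# Confirm the configured storage directory exists and is writable by atlas
grep 'atlas.graph.storage.directory' /etc/atlas/conf/application.properties
ls -ld /var/lib/atlas/data/berkeley
sudo -u atlas touch /var/lib/atlas/data/berkeley/.write_test && echo writable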
Labels:
- Apache Atlas
11-14-2016 01:01 PM
I installed Solr, but its status is Stopped (red) in the Ambari console. When I attempt to restart the service, which I have already done repeatedly, the log shows it is already running:
stderr: /var/lib/ambari-agent/data/errors-253.txt
/usr/lib/python2.6/site-packages/resource_management/core/environment.py:165: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
Logger.info("Skipping failure of {0} due to ignore_failures. Failure reason: {1}".format(resource, ex.message))
2016-11-14 18:58:11,634 - Solr is running, it cannot be started again
stdout: /var/lib/ambari-agent/data/output-253.txt
2016-11-14 18:58:06,871 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:06,871 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:06,871 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:06,923 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:06,924 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:06,950 - checked_call returned (0, '')
2016-11-14 18:58:06,951 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:06,951 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,151 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:07,151 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:07,151 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:07,177 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:07,178 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:07,200 - checked_call returned (0, '')
2016-11-14 18:58:07,201 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:07,201 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,205 - Group['hadoop'] {}
2016-11-14 18:58:07,209 - Group['users'] {}
2016-11-14 18:58:07,209 - Group['zeppelin'] {}
2016-11-14 18:58:07,209 - Group['solr'] {}
2016-11-14 18:58:07,210 - Group['knox'] {}
2016-11-14 18:58:07,210 - Group['ranger'] {}
2016-11-14 18:58:07,210 - Group['spark'] {}
2016-11-14 18:58:07,210 - Group['livy'] {}
2016-11-14 18:58:07,211 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,212 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,212 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,213 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2016-11-14 18:58:07,214 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,215 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,215 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,216 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,216 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,217 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,218 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,218 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,219 - User['solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,219 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,220 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,220 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,221 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,222 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,223 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,223 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,224 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,226 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-11-14 18:58:07,226 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,227 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,228 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,229 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-11-14 18:58:07,230 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-14 18:58:07,274 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-14 18:58:07,285 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-14 18:58:07,286 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-14 18:58:07,287 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-14 18:58:07,289 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-14 18:58:07,294 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-14 18:58:07,294 - Group['hdfs'] {}
2016-11-14 18:58:07,295 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-11-14 18:58:07,295 - FS Type:
2016-11-14 18:58:07,296 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-14 18:58:07,316 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,318 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-14 18:58:07,335 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-14 18:58:07,347 - Skipping Execute[('setenforce', '0')] due to not_if
2016-11-14 18:58:07,347 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-14 18:58:07,356 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-14 18:58:07,356 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-14 18:58:07,374 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-11-14 18:58:07,380 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-11-14 18:58:07,381 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-14 18:58:07,397 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,398 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-14 18:58:07,407 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-14 18:58:07,426 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-14 18:58:07,430 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-14 18:58:07,700 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-14 18:58:07,700 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-14 18:58:07,701 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-14 18:58:07,724 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-14 18:58:07,724 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-14 18:58:07,747 - checked_call returned (0, '')
2016-11-14 18:58:07,747 - Ensuring that hadoop has the correct symlink structure
2016-11-14 18:58:07,748 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-14 18:58:07,749 - Execute['/opt/lucidworks-hdpsearch/solr/bin/solr stop -all >> /var/log/service_solr/solr-service.log 2>&1'] {'environment': {'JAVA_HOME': '/usr/lib/jvm/java-1.7.0-oracle'}, 'user': 'solr'}
2016-11-14 18:58:08,041 - File['/var/run/solr/solr-8983.pid'] {'action': ['delete']}
2016-11-14 18:58:08,041 - Pid file /var/run/solr/solr-8983.pid is empty or does not exist
2016-11-14 18:58:08,042 - Directory['/opt/lucidworks-hdpsearch/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,044 - Directory['/var/log/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,044 - Directory['/var/log/service_solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,045 - Directory['/var/run/solr'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,045 - Directory['/etc/solr/conf'] {'owner': 'solr', 'create_parents': True, 'group': 'solr', 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,046 - Directory['/etc/solr/data_dir'] {'owner': 'solr', 'group': 'solr', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2016-11-14 18:58:08,047 - Execute[('chmod', '-R', '777', '/opt/lucidworks-hdpsearch/solr/server/solr-webapp')] {'sudo': True}
2016-11-14 18:58:08,104 - File['/opt/lucidworks-hdpsearch/solr/bin/solr.in.sh'] {'owner': 'solr', 'content': InlineTemplate(...)}
2016-11-14 18:58:08,106 - File['/etc/solr/conf/log4j.properties'] {'owner': 'solr', 'content': InlineTemplate(...)}
2016-11-14 18:58:08,128 - File['/etc/solr/data_dir/solr.xml'] {'owner': 'solr', 'content': Template('solr.xml.j2')}
2016-11-14 18:58:08,129 - HdfsResource['/user/solr'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://<host>:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'solr', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon']}
2016-11-14 18:58:08,133 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://<host>:50070/webhdfs/v1/user/solr?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp7MLiQD 2>/tmp/tmp1gMY3t''] {'logoutput': None, 'quiet': False}
2016-11-14 18:58:08,186 - call returned (0, '')
2016-11-14 18:58:08,186 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/clusterstate.json'] {'timeout': 60}
2016-11-14 18:58:08,884 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/clusterstate.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:08,884 - Execute['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr'] {'ignore_failures': True, 'user': 'solr'}
2016-11-14 18:58:09,302 - Skipping failure of Execute['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr'] due to ignore_failures. Failure reason: Execution of 'export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd makepath /solr' returned 1. Exception in thread "main" org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /solr
at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:501)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:498)
at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:455)
at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:442)
at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:398)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:258)
2016-11-14 18:58:09,302 - HdfsResource['/solr'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://<host>:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'solr', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/tmp', u'/app-logs', u'/mr-history/done', u'/apps/falcon']}
2016-11-14 18:58:09,303 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://<host>:50070/webhdfs/v1/solr?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpUGvE9U 2>/tmp/tmp9AYBHf''] {'logoutput': None, 'quiet': False}
2016-11-14 18:58:09,341 - call returned (0, '')
2016-11-14 18:58:09,341 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/clusterprops.json'] {'timeout': 60}
2016-11-14 18:58:09,743 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/clusterprops.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:09,743 - call['export JAVA_HOME=/usr/lib/jvm/java-1.7.0-oracle; /opt/lucidworks-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <host>:2181 -cmd get /solr/security.json'] {'timeout': 60}
2016-11-14 18:58:10,260 - call returned (1, 'Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /solr/security.json\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:111)\n\tat org.apache.zookeeper.KeeperException.create(KeeperException.java:51)\n\tat org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)\n\tat org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)\n\tat org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)\n\tat org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)\n\tat org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:296)')
2016-11-14 18:58:10,260 - call['netstat -lnt | awk -v v1=8983 '$6 == "LISTEN" && $4 ~ ":"+v1''] {'timeout': 60}
2016-11-14 18:58:10,338 - call returned (0, '')
2016-11-14 18:58:10,339 - Solr port validation output:
2016-11-14 18:58:10,339 - call['/opt/lucidworks-hdpsearch/solr/bin/solr status'] {'timeout': 60}
2016-11-14 18:58:11,633 - call returned (0, 'Found 1 Solr nodes: \n\nSolr process 10244 running on port 8886\n{\n "solr_home":"/opt/ambari_infra_solr/data",\n "version":"5.5.2 8e5d40b22a3968df065dfc078ef81cbb031f0e4a - sarowe - 2016-06-21 11:44:11",\n "startTime":"2016-11-14T09:03:22.462Z",\n "uptime":"0 days, 1 hours, 54 minutes, 49 seconds",\n "memory":"150.5 MB (%7.7) of 981.4 MB",\n "cloud":{\n "ZooKeeper":"<host>:2181/infra-solr",\n "liveNodes":"1",\n "collections":"4"}}')
2016-11-14 18:58:11,634 - Solr status output: Found 1 Solr nodes:
Solr process 10244 running on port 8886
{
"solr_home":"/opt/ambari_infra_solr/data",
"version":"5.5.2 8e5d40b22a3968df065dfc078ef81cbb031f0e4a - sarowe - 2016-06-21 11:44:11",
"startTime":"2016-11-14T09:03:22.462Z",
"uptime":"0 days, 1 hours, 54 minutes, 49 seconds",
"memory":"150.5 MB (%7.7) of 981.4 MB",
"cloud":{
"ZooKeeper":"<host>:2181/infra-solr",
"liveNodes":"1",
"collections":"4"}}
2016-11-14 18:58:11,634 - Solr is running, it cannot be started again
Command failed after 1 tries
I am using the latest version, HDP 2.5.
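From the output above, the status call seems to be finding a Solr on port 8886 (which looks like Ambari Infra Solr) rather than the HDP Search Solr that the pid file at /var/run/solr/solr-8983.pid refers to; a quick sketch of how I'd check which instances are actually listening (ports and paths taken from the log above):
# Show running Solr processes and their ports
ps -ef | grep '[s]olr'
netstat -lnt | grep -E ':(8983|8886)'
# The pid file the Ambari script checks, versus what the solr script reports
cat /var/run/solr/solr-8983.pid
/opt/lucidworks-hdpsearch/solr/bin/solr status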
Labels:
- Apache Ambari
- Apache Solr
09-19-2016 02:18 AM
Suppose I install four separate instances of PostgreSQL 9.3. If, for example, the Ranger database fails, that also means a failure for HDFS and Hive security (among others), so these components (Ambari, Hive, Oozie, Ranger) are not independent enough to guarantee that a failure in one database leaves the others operating smoothly. Someone suggested running a single database instance for all four services in High Availability mode (master-slave with a warm standby), or, in a multi-node cluster, four separate database instances (presumably the same distribution and version) each in High Availability. For an inexperienced DB admin like me, though, this is quite a chore. From the PostgreSQL documentation, there are a number of High Availability solutions, such as Shared Disk Failover, Transaction Log Shipping, etc. Which solution did you employ for PostgreSQL HA? Can those who have done this in a production cluster share how you did it? @Sunile Manjee
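For concreteness, here is a minimal sketch of the built-in streaming-replication / warm-standby setup as I understand it from the PostgreSQL 9.3 docs (the data directory path and hostnames are placeholders, and pg_hba.conf also needs a replication entry):
# On the primary: enable WAL shipping for a standby
cat >> /var/lib/pgsql/9.3/data/postgresql.conf <<'EOF'
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 64
EOF
# On the standby: take a base backup of the primary, then point recovery.conf at it
cat > /var/lib/pgsql/9.3/data/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=primary-host port=5432 user=replicator'
EOF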
09-17-2016 03:53 AM
I'm following this tutorial on cluster installation using Ambari: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-installation/content/database_requirements.html Ambari, Hive, Oozie and Ranger each require an RDBMS. Is it a good idea for them to share a single database installation, or should I install one for each separately? If separate RDBMS instances are the way to go and I choose, for example, PostgreSQL 9.3, can I have multiple instances of the same version?
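In case it helps frame the question: even with a single PostgreSQL installation, each service can get its own database and login; a sketch with placeholder passwords:
# One database and owner per service inside a single PostgreSQL instance
sudo -u postgres psql -c "CREATE USER ambari WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE ambari OWNER ambari;"
sudo -u postgres psql -c "CREATE USER hive WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE hive OWNER hive;"
# ...repeat for oozie and ranger, then add matching client entries in pg_hba.conf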
09-16-2016 04:09 AM
Hi, we're planning to put up a 10-20 node cluster, but in the meantime we have a single-node instance in Azure where we want to practice installing HDP component by component. I have consulted the manual installation documentation here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_installing_manually_book/content/ch_getting_ready_chapter.html but so far I have not found documentation on installing Ambari first and then installing everything else (except ZooKeeper and core Hadoop) through it. I'm a bit confused and not sure which is better: (1) Should I follow the manual installation instructions from the link above and install Ambari later? (2) Or should I install Ambari first and install the components through it? If the second option is the right direction, can you point me to some documentation on how to do it? When we get to the point of putting up the multi-node cluster, which installation process is better? What are the pros and cons of (1) versus (2)? I'd appreciate any helpful tips.
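For option (2), the Ambari-first route as I understand it from the Ambari install guide (a sketch for CentOS; double-check the repo URL against the docs for your version):
# Register the Ambari repository and install the server
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.0.1/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum install -y ambari-server
ambari-server setup -s    # -s = silent setup with defaults (embedded PostgreSQL, default JDK)
ambari-server start
# Then browse to http://<ambari-host>:8080 and run the Cluster Install Wizard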
Labels:
- Apache Ambari
09-01-2016 09:40 AM
I am using Hortonworks Sandbox 2.4, and I am trying to use the beeline command instead of the old hive CLI. Based on the Hortonworks tutorials, the default credentials are:
UN: guest
PW: guest-password
I'm interested in adding my own user, both as a user in CentOS Linux and in HDFS. In addition, I want to tinker with Ranger policies, applying restrictions to the user(s) I add. How do I add a new user (and set its password) so that it can access HiveServer2 using beeline?
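The sequence I have in mind so far (a sketch; the name and password are placeholders, and it assumes the sandbox's default HiveServer2 authentication against local users rather than Kerberos):
# 1. Create the OS user on the sandbox
useradd newuser
passwd newuser
# 2. Create the user's HDFS home directory as the hdfs superuser
sudo -u hdfs hadoop fs -mkdir /user/newuser
sudo -u hdfs hadoop fs -chown newuser /user/newuser
# 3. Connect to HiveServer2 with beeline as the new user
beeline -u "jdbc:hive2://localhost:10000/default" -n newuser -p 'newuser-password'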
Labels:
- Apache Hive
08-30-2016 10:19 AM
1 Kudo
I am browsing a number of Oozie examples, and the three actions I'm most interested in are the Sqoop, Hive and Pig actions. In some examples, the mapreduce.job.queuename property (mapred.queue.name in older versions) is configured, normally to default, like this:
<action>
  <sqoop>
    ...
    <configuration>
      <property>
        <name>mapreduce.job.queuename</name>
        <value>default</value>
      </property>
    </configuration>
  </sqoop>
</action>
On the other hand, other examples (especially for the Pig, Hive and Hive2 actions) don't specify any queue, and no global configuration is specified either. Is configuring the queue necessary? What would happen if I didn't specify one?
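For reference, the workflow-level alternative I've seen is a <global> section (a sketch; I believe it's supported from workflow schema 0.4 onward), which sets the queue once instead of per action:
<workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.4">
  <global>
    <configuration>
      <property>
        <name>mapreduce.job.queuename</name>
        <value>default</value>
      </property>
    </configuration>
  </global>
  ...
</workflow-app>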
Labels:
- Apache Oozie