Member since
09-26-2016
74
Posts
4
Kudos Received
6
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1399 | 08-06-2018 06:55 PM |
| | 731 | 12-21-2017 04:28 PM |
| | 613 | 11-03-2017 05:07 PM |
| | 974 | 03-20-2017 03:37 PM |
| | 3829 | 03-06-2017 03:54 PM |
07-23-2020
08:10 AM
This can also happen if SELinux is not disabled: the install will hang until you run setenforce 0
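A sketch of the fix, including making it persistent (the sed edit is demonstrated on a temp copy of /etc/selinux/config, since editing the real file needs root):

```shell
# temporarily switch SELinux to permissive (requires root on the actual host):
#   setenforce 0
# to persist across reboots, set SELINUX=permissive in /etc/selinux/config,
# demonstrated here on a temp copy of that file:
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=permissive
rm -f "$cfg"
```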
07-02-2019
05:53 PM
The Ambari agent can also fail due to a bug whose workaround requires setting: force_https_protocol=PROTOCOL_TLSv1_2
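For reference, that setting goes under the [security] section of the agent config (/etc/ambari-agent/conf/ambari-agent.ini on standard installs); restart the agent afterwards:

```ini
[security]
force_https_protocol=PROTOCOL_TLSv1_2
```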
03-05-2019
07:38 PM
For production, stay on 2.6.2 and HDP 2.6.4. Hopefully they integrate Hue 4 into HDP now that Cloudera is on board; it makes no sense to remove features and then offer broken, half-baked replacements.
11-28-2018
07:55 PM
If that's the case, then why does Ambari have a login page without HTTPS? Sometimes it's useful to set up the login first and then add a security layer; it helps with troubleshooting, and not having a login for Ambari (for example) would be confusing! So why is this any different?
09-21-2018
12:39 PM
Did you ever find a solution?
08-06-2018
06:55 PM
For those facing this issue with hadoop_logs (Log Search):
1. Delete the Log Search service from the Ambari UI.
2. Go into the ZooKeeper shell (/usr/hdp/current/zookeeper-client/bin/zkCli.sh) and run: rmr /infra-solr/configs/hadoop_logs
3. Open up Solr and delete the shards/collection for hadoop_logs.
4. SSH into your Solr servers as root/sudo and delete /opt/ambari_infra_solr/data/hadoop_logs*
5. Make sure you have at least 2 instances of Solr installed.
6. Install Log Search on a new node (a Solr instance should be installed on this node as well).
Log Search should be running now.
06-26-2018
05:00 PM
For future readers, this problem/symptom can occur when hive.exec.scratchdir accumulates too many subdirectories. In one case, I had over 1,400,000 subdirectories under /tmp/hive. To fix:
1. hdfs dfs -rm -r /tmp/hive/*
2. Restart Hive from Ambari.
3. Test the Hive View.
05-15-2018
05:46 PM
Fresh cluster (2.6.4) was working; I restarted ZooKeeper and am now seeing multiple errors. How can I fix this, and why doesn't Solr just send new data to ZooKeeper? solr-problem.png One log example: null:org.apache.solr.common.SolrException: SolrCore 'hadoop_logs_shard0_replica1' is not available due to init failure: Specified config does not exist in ZooKeeper: hadoop_logs
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1071)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:414)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.cloud.ZooKeeperException: Specified config does not exist in ZooKeeper: hadoop_logs
at org.apache.solr.common.cloud.ZkStateReader.readConfigName(ZkStateReader.java:161)
at org.apache.solr.cloud.CloudConfigSetService.createCoreResourceLoader(CloudConfigSetService.java:36)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:75)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:818)
at org.apache.solr.core.CoreContainer.access$000(CoreContainer.java:90)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:473)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:464)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:231)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Labels:
- Apache Solr
03-12-2018
03:22 PM
I agree with your concerns; on top of that, HUE is a better UI (especially when using workflows). The Ambari views contain quite a few bugs and lack simple features.
02-19-2018
02:56 PM
Any ideas as to why the UI drops connections?
12-28-2017
05:46 PM
This site can't provide a secure connection. server1.com.corp uses an unsupported protocol. ERR_SSL_VERSION_OR_CIPHER_MISMATCH. Unsupported protocol: the client and server don't support a common SSL protocol version or cipher suite.
12-28-2017
05:45 PM
Doesn't work. This site can't provide a secure connection. my.server.com.corp uses an unsupported protocol. ERR_SSL_VERSION_OR_CIPHER_MISMATCH. Unsupported protocol: the client and server don't support a common SSL protocol version or cipher suite.
12-21-2017
04:58 PM
No, as long as the agents are already installed you should be good to go.
12-21-2017
04:28 PM
Yes, you should be able to SSH without a password to all nodes FROM the Ambari host; that way, the agents can be installed automatically. You don't want to install Ambari agents manually. https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/set_up_password-less_ssh.html
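The usual setup is one ssh-keygen on the Ambari host plus ssh-copy-id to every agent node. A sketch (hostnames are placeholders, and the key is generated into a temp dir here rather than ~/.ssh):

```shell
# generate a passphrase-less key pair (demonstrated in a temp dir)
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -q -f "$keydir/id_rsa"
ls "$keydir"    # id_rsa  id_rsa.pub
# then push the public key to each node Ambari will manage, e.g.:
#   ssh-copy-id -i "$keydir/id_rsa.pub" root@agent-node1.example.com
rm -rf "$keydir"
```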
12-09-2017
02:57 AM
Check your /etc/hosts file and make sure it has entries for all hosts. Also check that you pulled the correct version of Ambari for your OS (CentOS 6 vs. CentOS 7): a mismatch can create problems with Python. It doesn't look like you have that issue, but if you are using Satellite it can happen when the wrong link is used. Lastly, check the firewalls.
11-03-2017
05:07 PM
1 Kudo
You achieve this by limiting access via firewall rules; other than that, Knox + Kerberos is the built-in method. Some resources: Secure authentication: core Hadoop uses Kerberos and Hadoop delegation tokens for security, and WebHDFS likewise uses Kerberos (SPNEGO) and Hadoop delegation tokens for authentication. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/configure_webhdfs_for_knox.html https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cdh_sg_secure_webhdfs_config.html
10-20-2017
03:49 PM
Try increasing the polling intervals on some of the processors; this can help both CPU usage and UI responsiveness.
10-20-2017
03:29 PM
Interesting. I find it strange that existing HDF clusters cannot be converted to HDF+HDP; you would think that would be the most common situation when trying to utilize Sqoop or other technologies on a pre-configured HDF cluster. Do you have any links for HDP mpacks?
10-20-2017
02:45 PM
2 Kudos
This info might be more helpful to guide you down the road of DR. With HDP in production, you must combine the different technologies on offer and tailor them together into your own solution. I've read through many solutions, and the info below is the most critical in my opinion. Remember, preventing data loss is better than recovering from it! Read these slides first:
https://www.slideshare.net/cloudera/hadoop-backup-and-disaster-recovery
https://www.slideshare.net/hortonworks/ops-workshop-asrunon20150112/72

1. VM snapshots
If you're not using VMs, then switch over.
- Ambari nightly VM snapshots
- NameNode VM snapshots

2. Lock down critical directories
fs.protected.directories (under the HDFS config in Ambari) protects critical directories from deletion. Accidental deletes of critical data sets are catastrophic errors that should be avoided by adding appropriate protections. For example, the /user directory is the parent of all user-specific subdirectories; attempting to delete the entire /user directory is very likely unintentional. To protect against accidental data loss, mark /user as protected. This prevents attempts to delete it unless the directory is already empty.

3. Backups
Backups can be automated using tools like Apache Falcon (being deprecated in HDP 3.0; switch to the workflow editor + DistCp) and Apache Oozie.

Using snapshots: HDFS snapshots can be combined with DistCp to create the basis for an online backup solution. Because a snapshot is a read-only, point-in-time copy of the data, it can be used to back up files while HDFS is still actively serving application clients.

Example: "accidentally" remove an important file:
sudo -u hdfs hdfs dfs -rm -r -skipTrash /tmp/important-dir/important-file.txt
Recover the file from the snapshot:
hdfs dfs -cp /tmp/important-dir/.snapshot/first-snapshot/important-file.txt /tmp/important-dir
hdfs dfs -cat /tmp/important-dir/important-file.txt

HDFS snapshots overview: a snapshot is a point-in-time, read-only image of the entire file system or a subtree of it. HDFS snapshots are useful for:
- Protection against user error: if a user accidentally deletes a file, it can be restored from the latest snapshot that contains it.
- Backup: files can be backed up from the snapshot image while the file system continues to serve HDFS clients.
- Test and development: files in an HDFS snapshot can be used to test new programs without affecting the file system that is concurrently supporting HDFS clients.
- Disaster recovery: snapshots can be replicated to a remote recovery site.

DistCp overview: Hadoop DistCp (distributed copy) can be used to copy data between Hadoop clusters or within a Hadoop cluster. It can copy individual files, an entire directory hierarchy, or multiple source directories to a single target directory. DistCp:
- Uses MapReduce to implement its I/O load distribution, error handling, and reporting.
- Has built-in support for multiple file system types (HDFS, Amazon S3, and others) and supports copying between different HDFS versions.
- Can generate a significant workload on the cluster if a large volume of data is being transferred.
- Has many command options; run hadoop distcp -help for online help.
10-20-2017
02:06 PM
1 Kudo
Is it possible to install another mpack alongside the current HDF mpack? The initial Ambari install was done with the --purge option. I want to add HDP components to HDF. Also, where are the links for the mpacks? Current mpack: hdf-ambari-mpack-3.0.1.1-5.tar.gz
Tags:
- ambari-server
Labels:
- Apache Ambari
10-18-2017
06:36 PM
OK, after finding this same bug on an HDF install, there are several common factors:
- Both installs were done via Ambari 2.4.2.
- On both installs, the version info removed all OS versions except the CentOS 7 entry.
- The OS entry XML was a custom entry from the Hortonworks repo link.
Possibly contributing bugs:
https://issues.apache.org/jira/browse/AMBARI-19637
https://issues.apache.org/jira/browse/AMBARI-17285
https://issues.apache.org/jira/browse/AMBARI-18562
https://issues.apache.org/jira/browse/AMBARI-18350
10-18-2017
02:33 PM
I just noticed this when I click on the current version under "Versions" and then click my CURRENT install.
10-18-2017
02:28 PM
I've tried IE 11, Chrome 56 & 57, as well as Edge. Is there a way to manually call the install API? All I need is for that wizard to pop up and do the install.
10-18-2017
02:19 PM
Here is the output (IPs replaced with 127.0.0.1): hwoutput.txt
10-17-2017
06:47 PM
It appears that my Ambari HDP versions are mismatched. I actually have 2.5.0.0-1245 installed, but it's showing as 2.5.3.0; under /usr/hdp/current I only see 2.5.0.0-1245. I have a similar issue to: https://community.hortonworks.com/questions/75760/ambari-admin-stack-versions-hangs.html I cannot proceed to upgrade HDP. There is an error mentioning that an OS property can't be found, and when I click on the current version under "Versions" and click my CURRENT install, I get a blank page.
Labels:
- Apache Ambari
09-28-2017
01:36 PM
Matt, do you know how to do this after already running the install command? This is what I'm getting from your command: ambari-server install-mpack --mpack=hdf-ambari-mpack-3.0.1.1-5.tar.gz --purge --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack hdf-ambari-mpack-3.0.1.1-5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.1-5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.1-5/
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: AMBARI_SERVER_LIB is not set, using default /usr/lib/ambari-server
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: about to run command: /usr/jdk64/jdk1.8.0_112/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.checks.MpackInstallChecker --mpack-stacks HDF
INFO:
process_pid=123456
CAUTION: You have specified the --purge option with --purge-list=['stack-definitions', 'mpacks']. This will replace all existing stack definitions, management packs currently installed.
Are you absolutely sure you want to perform the purge [yes/no]? (no)y
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Purging existing stack definitions and management packs
INFO: Purging stack location: /var/lib/ambari-server/resources/stacks
INFO: Purging mpacks staging location: /var/lib/ambari-server/resources/mpacks
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack hdf-ambari-mpack-3.0.1.1-5 to staging location /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5
INFO: Processing artifact hdf-service-definitions of type service-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 941, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 911, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 863, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 78, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 896, in install_mpack
(mpack_metadata, mpack_name, mpack_version, mpack_staging_dir, mpack_archive_path) = _install_mpack(options, replay_mode)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 792, in _install_mpack
process_service_definitions_artifact(artifact, artifact_source_dir, options)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 515, in process_service_definitions_artifact
create_symlink(src_service_definitions_dir, dest_service_definitions_dir, file, options.force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 235, in create_symlink
create_symlink_using_path(src_path, dest_link, force)
File "/usr/lib/python2.6/site-packages/ambari_server/setupMpacks.py", line 247, in create_symlink_using_path
sudo.symlink(src_path, dest_link)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 123, in symlink
os.symlink(source, link_name)
OSError: [Errno 17] File exists
09-27-2017
09:52 PM
My HDF install is showing HDP components... I've already run: ambari-server install-mpack --mpack=hdf-ambari-mpack-3.0.1.1-5.tar.gz --verbose
ambari-server install-mpack --mpack=hdf-ambari-mpack-3.0.1.1-5.tar.gz --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack hdf-ambari-mpack-3.0.1.1-5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.1-5.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/hdf-ambari-mpack-3.0.1.1-5/
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack hdf-ambari-mpack-3.0.1.1-5 to staging location /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5
INFO: Processing artifact hdf-service-definitions of type service-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5/common-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.0.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.1.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/NIFI/1.2.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/REGISTRY/0.3.0
INFO: Symlink: /var/lib/ambari-server/resources/common-services/STREAMLINE/0.5.0
INFO: Processing artifact hdf-stack-definitions of type stack-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5/stacks
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/configuration
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/hooks
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/kerberos.json
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/metainfo.xml
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/properties
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/repos
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/role_command_order.json
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/AMBARI_INFRA
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/AMBARI_METRICS
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/KAFKA
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/KERBEROS
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/LOGSEARCH
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/NIFI
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/RANGER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/STORM
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/ZOOKEEPER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/services/stack_advisor.py
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/upgrades
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.0/widgets.json
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/metainfo.xml
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/repos
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/KAFKA
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/NIFI
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/RANGER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/STORM
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/ZOOKEEPER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/services/stack_advisor.py
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/2.1/upgrades
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/configuration
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/metainfo.xml
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/properties
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/repos
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/KAFKA
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/NIFI
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/RANGER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/REGISTRY
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/STORM
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/STREAMLINE
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/ZOOKEEPER
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/services/stack_advisor.py
INFO: Symlink: /var/lib/ambari-server/resources/stacks/HDF/3.0/upgrades
INFO: Processing artifact hdp-addon-services of type stack-addon-service-definitions in /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.0.1.1-5/hdp-addon-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Adjusting file permissions and ownerships
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/stacks
INFO:
process_pid=81323
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/stacks
INFO:
process_pid=81324
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/extensions
INFO:
process_pid=81325
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/extensions
INFO:
process_pid=81326
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/common-services
INFO:
process_pid=81327
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/common-services
INFO:
process_pid=81328
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=81329
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=81330
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=81331
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=81332
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=81333
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=81335
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/stacks
INFO:
process_pid=81338
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/extensions
INFO:
process_pid=81340
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/common-services
INFO:
process_pid=81341
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=81342
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=81343
INFO: about to run command: chown -R -L ambari /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=81344
INFO: Management pack hdf-ambari-mpack-3.0.1.1-5 successfully installed! Please restart ambari-server.
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Ambari Server 'install-mpack' completed successfully.
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
09-05-2017
09:54 PM
How is your YARN queue set up? Can 'default' accept more than one job at a time?
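For reference, these are the capacity-scheduler properties that usually govern how many concurrent applications the 'default' queue will run (values here are illustrative, not your cluster's; a too-low AM resource percent is a common reason only one job runs at a time):

```properties
yarn.scheduler.capacity.root.default.capacity=100
yarn.scheduler.capacity.root.default.maximum-applications=10000
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
```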
08-30-2017
02:55 PM
Check your network firewall and your local firewall; disable iptables to test.
08-29-2017
06:28 PM
If your password contains special characters such as "&", it will break the XML. The fix for this example is replacing the & with the XML entity &amp; (ampersand, then "amp;").
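A minimal sketch of the escaping (the password value is hypothetical), using sed before the value is placed in a config file:

```shell
# hypothetical password containing an ampersand
password='secret&pass'
# escape & for XML; in the sed replacement, \& produces a literal ampersand
escaped=$(printf '%s' "$password" | sed 's/&/\&amp;/g')
printf '%s\n' "$escaped"   # secret&amp;pass
```

If the password also contains <, >, ', or ", those need their own entities (&lt;, &gt;, &apos;, &quot;), with & escaped first.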