Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3173 | 12-25-2018 10:42 PM |
| | 14193 | 10-09-2018 03:52 AM |
| | 4764 | 02-23-2018 11:46 PM |
| | 2481 | 09-02-2017 01:49 AM |
| | 2914 | 06-21-2017 12:06 AM |
06-18-2016
01:54 AM
Try running /root/start_ambari.sh, then check "ambari-server status"; if Ambari is running, retry access from the browser.
06-18-2016
01:22 AM
2 Kudos
Since the service checks of all major services were successful, you can just select "Ignore and Proceed" and handle the Tez issue once the upgrade is over.
06-18-2016
12:50 AM
1 Kudo
Hi @Artem Ervits, you are always on the cutting edge of new technologies! Regarding your question, how about one of the ST_*FromWKB functions, for example ST_GeomFromWKB? There are more details about Well-known binary here.
06-18-2016
12:26 AM
First of all, next time stop all services from Ambari before rebooting the machines. At this point you can try to force HDFS out of safe mode by setting dfs.namenode.safemode.threshold-pct either to 0.0f or to 0.8f (since 1148/1384 = 82.9%) and restarting HDFS. Once HDFS starts and is out of safe mode, locate corrupted files with fsck and delete them, and restore the important ones using the HDP distribution files in /usr/hdp/current. There is no need (and in some cases no way) to restore the files related to ambari-qa, those under /tmp, and some others.
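As a sketch, the threshold property goes into hdfs-site.xml (in Ambari, under HDFS > Configs); the 0.8f value corresponds to the 82.9% of reported blocks mentioned above:

```xml
<!-- hdfs-site.xml: let the NameNode leave safe mode once 80% of blocks are reported -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.8f</value>
</property>
```

Alternatively, "hdfs dfsadmin -safemode leave" exits safe mode immediately without a restart, and "hdfs fsck / -list-corruptFileBlocks" lists the corrupted files.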
06-17-2016
11:32 PM
1 Kudo
You don't need a browser inside the Sandbox; just launch a browser like Firefox on your PC and visit http://localhost:8080
06-02-2016
12:13 PM
1 Kudo
Yes, the order of elements in the XML workflow file is important; see the schema excerpt below. xs:sequence means the order must be exactly as given, and elements marked with minOccurs="0" are optional and can be omitted.

```xml
<xs:complexType name="ACTION">
    <xs:sequence>
        <xs:element name="job-tracker" type="xs:string" minOccurs="1" maxOccurs="1"/>
        <xs:element name="name-node" type="xs:string" minOccurs="1" maxOccurs="1"/>
        <xs:element name="prepare" type="sqoop:PREPARE" minOccurs="0" maxOccurs="1"/>
        <xs:element name="job-xml" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
        <xs:element name="configuration" type="sqoop:CONFIGURATION" minOccurs="0" maxOccurs="1"/>
        <xs:choice>
            <xs:element name="command" type="xs:string" minOccurs="1" maxOccurs="1"/>
            <xs:element name="arg" type="xs:string" minOccurs="1" maxOccurs="unbounded"/>
        </xs:choice>
        <xs:element name="file" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
        <xs:element name="archive" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
</xs:complexType>
```
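For illustration, a minimal Sqoop action that satisfies the sequence above: job-tracker, then name-node, then one of command/arg (the action name, parameter names, and connection string are placeholders):

```xml
<action name="sqoop-import">
    <sqoop xmlns="uri:oozie:sqoop-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <command>import --connect jdbc:mysql://db.example.com/mydb --table mytable --target-dir /user/me/mytable</command>
    </sqoop>
    <ok to="end"/>
    <error to="fail"/>
</action>
```

Swapping the element order (for example, putting name-node before job-tracker) makes the workflow fail schema validation at submission time.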
06-01-2016
11:54 PM
There is a bug in the HDFS replication workflow, FALCON-189, causing this. As a quick fix, edit drSourceDir to include the source FS, so that it becomes ${drSourceClusterFS}${drSourceDir}, as discussed in the Jira. The file to fix is /usr/hdp/current/falcon-server/data-mirroring/workflows/hdfs-replication-workflow.xml on your Falcon server. Then upload the file to /apps/data-mirroring/workflows in HDFS, restart Falcon, and retry running the mirroring job from the target cluster.
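As a sketch of the edit, assuming the workflow passes the source path as an argument (locate the actual ${drSourceDir} occurrence in your copy of the file, which may differ by HDP version):

```xml
<!-- hdfs-replication-workflow.xml: prepend the source filesystem to the source dir -->
<!-- before: <arg>${drSourceDir}</arg> -->
<arg>${drSourceClusterFS}${drSourceDir}</arg>
```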
05-29-2016
02:39 PM
That error is thrown when /usr/hdp/2.4.0.0-169/zookeeper/conf doesn't exist, and also when it exists but is not a symlink. Try to arrange the links manually as follows, after moving the config files to /etc/zookeeper/2.4.0.0-169/0:

/etc/zookeeper/conf -> /usr/hdp/current/zookeeper-client/conf
/usr/hdp/current/zookeeper-client/conf -> /etc/zookeeper/2.4.0.0-169/0
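This link layout can be rehearsed in a scratch directory before touching the real node (the real paths are /etc/zookeeper and /usr/hdp; this sketch only mirrors them under a temp dir):

```shell
# Recreate the expected HDP conf symlink chain under a scratch root
base=$(mktemp -d)
mkdir -p "$base/etc/zookeeper/2.4.0.0-169/0"
mkdir -p "$base/usr/hdp/current/zookeeper-client"
# versioned client conf link points at the real config directory
ln -s "$base/etc/zookeeper/2.4.0.0-169/0" "$base/usr/hdp/current/zookeeper-client/conf"
# /etc/zookeeper/conf points at the client conf link
ln -s "$base/usr/hdp/current/zookeeper-client/conf" "$base/etc/zookeeper/conf"
readlink "$base/etc/zookeeper/conf"
```

On the actual host, drop the $base prefix and remove any stale non-symlink conf directories first (after backing them up).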
05-29-2016
01:43 PM
Hi @mayki wogno, I see that you marked your question as "Resolved". If my answer below helped, can you please accept it? If you resolved your issue by other means, please post them and we'll accept your answer. Instead of questions marked as "resolved", we consider questions resolved once an answer is accepted. Thanks!
05-28-2016
04:43 AM
1 Kudo
You can install Spark manually and run it like that without telling Ambari, or you can register the Spark History Server and Spark clients using the Ambari REST API; that's the easiest way. Otherwise, removing the Hive dependency would require changing some Ambari files.