Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1001 | 06-04-2025 11:36 PM |
| | 1568 | 03-23-2025 05:23 AM |
| | 785 | 03-17-2025 10:18 AM |
| | 2831 | 03-05-2025 01:34 PM |
| | 1863 | 03-03-2025 01:09 PM |
06-24-2019
10:15 PM
1 Kudo
@chethan mh The same HWX document categorically states: "If you are running an earlier HDF version, upgrade to at least HDF 3.1.0, and then proceed to the HDF 3.3.0 upgrade." Only HDF 3.3.x and HDF 3.2.x can be directly upgraded to 3.4.0, so you will have a three-step migration: first to at least HDF 3.1.0, then to HDF 3.3.0, and finally to 3.4.0.

If that effort is too much, NiFi can export/import flows via templates. You can save your flow as a template (an XML file) and import the template from a file as well. If you want to save the entire flow you have in the system, you can also find it in nifi/conf/flow.xml.gz on your NiFi box. This is not a template, but it can be dropped into a clean NiFi instance. Follow this link to the NiFi procedure. Please revert
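As a rough sketch of that flow.xml.gz approach (the paths below are placeholders for your actual NiFi install directories, and NiFi should be stopped before copying):

# On the old node: stop NiFi and keep a copy of the full flow definition
/opt/old-nifi/bin/nifi.sh stop
cp /opt/old-nifi/conf/flow.xml.gz /tmp/flow.xml.gz.bak

# On the new node: stop NiFi, drop the file into conf/, then start it again
/opt/new-nifi/bin/nifi.sh stop
cp /tmp/flow.xml.gz.bak /opt/new-nifi/conf/flow.xml.gz
/opt/new-nifi/bin/nifi.sh start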
06-24-2019
10:00 PM
1 Kudo
@Spandan Mohanty YARN Registry DNS is a new component introduced in HDP 3.0 and usually runs on port 53. The most probable issue is that the port is already in use by another process. To check, verify whether port 53 is available on the YARN host:

# nc -l `hostname -f` 53
Ncat: bind to x.x.x.x:53: Address already in use. QUITTING.

If the port is taken, either free it or change the value of hadoop.registry.dns.bind-port and restart the Registry DNS. Start the DNS server with:

yarn --daemon start registrydns

Please revert
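If the bind fails as above, a quick way to see which process is already holding port 53 (commands assume standard RHEL/CentOS tooling; dnsmasq is only an example of a common culprit):

# Identify the PID/program listening on port 53
ss -tulpn | grep ':53 '
lsof -i :53

# If it is something like dnsmasq that you don't need, stop and disable it
systemctl stop dnsmasq
systemctl disable dnsmasq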
06-24-2019
09:18 PM
1 Kudo
@Michael Bronson The simple answer is NO. HDP 3.1 supports only Ambari version 2.7.3 according to the official HWX support matrix (see screenshots below). During the process of upgrading to Ambari 2.7.3 and HDP 3.1.0, additional components will be added to your cluster, and deprecated services and views will be removed.

Ambari 2.6.x to Ambari 2.7.3
The Ambari 2.6.x to Ambari 2.7.3 upgrade will remove the following views:
Hive View 1.5
Hive View 2
Hue To Ambari View Migration
Slider
Storm
Tez
Pig

The Ambari Pig View is deprecated in HDP 3.0 and later. Ambari does not enable Pig View. To enable Pig View in HDP 3.0 and later, you need to contact Hortonworks support for instructions that include how to install WebHCat using an Ambari management pack.

HDP 2.6.x to HDP 3.1.0
The HDP 2.6.x to HDP 3.1.0 upgrade will add the following components if YARN is deployed in the cluster being upgraded:
YARN Registry DNS
YARN Timeline Service V2.0 Reader

The HDP 2.6.x to HDP 3.1.0 upgrade will remove the following services:
Flume
Mahout
Falcon
Spark 1.6
Slider
WebHCat

(Screenshots: 2.6.4.PNG, HDP3.1.PNG)
06-20-2019
04:20 PM
@sundeep dande Can you be more precise? Are you dropping a MySQL, Oracle, Hive, or HBase table? Also, under which user were you executing the command? Have you checked whether you have the correct privileges on the object (table), i.e. DROP, DELETE, UPDATE, etc.?
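If it turns out to be a Hive table, one quick check of your grants (a sketch assuming SQL-standard authorization is enabled; the user and table names below are made up) would be:

# List the privileges a given user holds on the table
hive -e "SHOW GRANT USER john_doe ON TABLE default.cars;"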
06-19-2019
06:19 AM
@Mighty Mike Normally you don't need to remember the passwords for these service users. If you have root access or are included in the sudoers file, you can switch to a user with su using the methods below:

su - {username} sets up the shell environment as if it were a clean login as the specified user; it accesses and uses the specified user's environment variables.
su {username} just starts a shell with the current environment settings for the specified user.

If the username is not specified with su or su -, the root account is implied as the default. To check the usernames, run cat /etc/passwd. Sample output:

livy:x:1013:1007::/home/livy:/bin/bash
spark:x:1014:1007::/home/spark:/bin/bash
ambari-qa:x:1015:1007::/home/ambari-qa:/bin/bash
kafka:x:1016:1007::/home/kafka:/bin/bash
hdfs:x:1017:1007::/home/hdfs:/bin/bash
sqoop:x:1018:1007::/home/sqoop:/bin/bash
yarn:x:1019:1007::/home/yarn:/bin/bash

The encrypted passwords and other information such as password expiry (the password aging information) are stored in the /etc/shadow file. All fields are separated by a colon (:) symbol, and there is one entry per line for each user listed in /etc/passwd. Sample output:

livy:!!:17900:0:99999:7:::
spark:!!:17900:0:99999:7:::
ambari-qa:!!:17900:0:99999:7:::
kafka:!!:17900:0:99999:7:::
hdfs:!!:17900:0:99999:7:::
sqoop:!!:17900:0:99999:7:::
yarn:!!:17900:0:99999:7:::
mapred:!!:17900:0:99999:7:::
hbase:!!:17900:0:99999:7:::
knox:!!:17900:0:99999:7:::

Hope that helps
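For example, to work as the hdfs service user without knowing its password (a small sketch assuming you have root or sudo rights; hdfs is just one of the accounts listed above):

# Full login shell as hdfs
sudo su - hdfs

# Or run a single command as hdfs and stay in your own shell
sudo -u hdfs hdfs dfs -ls /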
06-17-2019
02:43 AM
@Michael Bronson Whenever you change this config parameter, the cluster needs to be made aware of the change. When you start Ambari, the underlying components don't get started unless you explicitly start them, so you can start Ambari without starting YARN or HDFS.
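As a rough illustration (host, credentials, and cluster name below are placeholders), starting only Ambari and then starting a single service through its REST API could look like this:

# Start the Ambari server; this does not start any Hadoop services by itself
ambari-server start

# Later, explicitly start HDFS via the Ambari REST API
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/HDFS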
06-16-2019
10:26 PM
1 Kudo
@Michael Bronson The parameter dfs.namenode.fs-limits.max-directory-items determines the maximum number of folders or files (not recursive) in one directory. The value range of this parameter is 1 to 6400000, and the default value is 1048576. Increase the value of dfs.namenode.fs-limits.max-directory-items and then restart the affected HDFS services from Ambari so that the new value takes effect.

Workaround: Go to Ambari -> HDFS -> Configs -> Advanced -> Custom hdfs-site and add the key dfs.namenode.fs-limits.max-directory-items with a higher value, e.g. double 1048576 to 2097152. Note that you cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 1 or greater than 6400000. After the restart the config should be pushed to the whole cluster, which will allow you to continue working.

HTH
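After the change is pushed and the services are restarted, one quick sanity check (assuming the hdfs client is on your path) is to ask the configuration layer what value is in effect:

# Confirm the new limit is visible in the HDFS configuration
hdfs getconf -confKey dfs.namenode.fs-limits.max-directory-items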
06-15-2019
05:52 PM
@choppadandi vamshi krishna With Hive 3.0 you have Hive and Druid storage options too, and ORC is the most common. I haven't tested and can't confirm whether creating an MV over Avro and refreshing it at a regular interval would work. You can also use the rebuild option to refresh the MV: when scripting, run the rebuild, which will overwrite the previous MV before you query it, so you always have an updated view:

ALTER MATERIALIZED VIEW mv REBUILD;

You also have the Druid storage handler org.apache.hadoop.hive.druid.DruidStorageHandler, or you can rebuild an MV every 5 minutes, but you should take into account that every rebuild will take longer than the previous one due to the addition of data in the source table. HTH
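A minimal sketch of that periodic rebuild with cron and beeline (the JDBC URL, the view name mv, and the log path are placeholders for your own environment):

# crontab entry: rebuild the materialized view every 5 minutes
*/5 * * * * beeline -u "jdbc:hive2://hiveserver2-host:10000/default" -e "ALTER MATERIALIZED VIEW mv REBUILD;" >> /var/log/mv_rebuild.log 2>&1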
06-15-2019
02:16 PM
@choppadandi vamshi krishna You can only create a materialized view on transactional tables, where changes in the base table are logged and there is a refresh mechanism to update the materialized view whenever it is queried. Please, can you check whether the base table is transactional? Below are steps to help you determine that. The assumption below is that your table cars is in the default database.

# hive -e "describe extended <Database>.<tablename>;" | grep "transactional=true"

If you get output containing the string you grep for, then the table is transactional. Example:

# hive -e "describe extended default.cars;" | grep "transactional=true"

Otherwise, alter the flat table to make it transactional:

ALTER TABLE cars SET TBLPROPERTIES ('transactional'='true');

Then try creating the materialized view again; it should succeed. Please revert
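As a hedged end-to-end example from the shell (the make column on cars is invented just for illustration):

# Make the base table transactional, then create a simple materialized view on it
hive -e "ALTER TABLE default.cars SET TBLPROPERTIES ('transactional'='true');"
hive -e "CREATE MATERIALIZED VIEW default.cars_by_make AS SELECT make, COUNT(*) AS cnt FROM default.cars GROUP BY make;"

If the CREATE MATERIALIZED VIEW still fails, double-check that the table is stored as ORC, since full ACID transactional tables require ORC.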
06-14-2019
07:30 PM
@Michael Bronson Is all good?