Member since: 05-23-2017
Posts: 40
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
(title unavailable) | 2870 | 11-08-2019 10:23 PM
(title unavailable) | 1656 | 10-16-2019 10:37 AM
(title unavailable) | 2062 | 10-15-2019 11:51 PM
(title unavailable) | 910 | 08-27-2019 03:57 AM
11-16-2019 08:42 PM
@Shriniwas - You can use the "hostname -i" command to check the IP address that the hostname resolves to.
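For example:
hostname -i
On most Linux systems, "hostname -I" (capital I) lists all of the host's configured addresses instead, which helps when "-i" resolves to 127.0.0.1 through an /etc/hosts entry.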
11-12-2019 06:52 AM
@Priyan - You can use the command below to get the details:
yarn applicationattempt -list <ApplicationId>
You can refer to the document below to build the command for your use case: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#applicationattempt
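For example, with a hypothetical application ID:
yarn applicationattempt -list application_1573539861473_0001
This lists each attempt of that application along with its state and AM container ID.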
11-08-2019 10:47 PM
@Eric_B - This scenario of two different processes updating a table (even two different rows) at the same time is not possible at the moment for ACID tables. Currently, the ACID concurrency management mechanism works at the partition level for partitioned tables and at the table level for non-partitioned tables (which I believe is your case). Basically, the system wants to prevent two parallel transactions from updating the same row. Unfortunately, it cannot track this at the individual row level; it does so at the partition and table level respectively.
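To illustrate, here is a minimal HiveQL sketch (the table, column, and partition names are hypothetical, and it assumes the cluster is already configured for ACID transactions):
-- Partitioned ACID table; transactional tables are stored as ORC,
-- and on HDP 2.x they must also be bucketed
CREATE TABLE orders (id INT, amount DECIMAL(10,2))
PARTITIONED BY (order_date STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
-- Session 1: takes a write lock on partition order_date='2019-11-01'
UPDATE orders SET amount = 10.0 WHERE order_date = '2019-11-01' AND id = 1;
-- Session 2: can run in parallel, because it locks a different partition
UPDATE orders SET amount = 20.0 WHERE order_date = '2019-11-02' AND id = 2;
On a non-partitioned table, the second UPDATE would have to wait for the first transaction's table-level lock to be released.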
11-08-2019 10:38 PM
@Tomas79 -
I am not sure that exactly what you are asking is possible, but you can control the size of these directories using the properties below. Setting them restricts the size of the local directories under the NodeManager and triggers the DeletionService when the limit is reached:
# yarn.nodemanager.delete.thread-count
# yarn.nodemanager.localizer.cache.target-size-mb
# yarn.nodemanager.localizer.cache.cleanup.interval-ms
Details under: https://blog.cloudera.com/resource-localization-in-yarn-deep-dive/
The yarn.nodemanager.localizer.cache.target-size-mb property defines the maximum disk space to be used for localizing resources. Once the total disk size of the cache exceeds this value, the deletion service tries to remove files that are not used by any running containers.
The yarn.nodemanager.localizer.cache.cleanup.interval-ms property defines the interval at which unused resources are deleted once the total cache size exceeds the configured maximum. Unused resources are those not referenced by any running container.
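For example, a minimal yarn-site.xml sketch (the values shown are the usual defaults and are for illustration only; tune them for your cluster):
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <!-- target cache size in MB; above this, unused resources become eligible for deletion -->
  <value>10240</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <!-- how often the cleanup check runs (10 minutes) -->
  <value>600000</value>
</property>
<property>
  <name>yarn.nodemanager.delete.thread-count</name>
  <!-- number of threads available to the DeletionService -->
  <value>4</value>
</property>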
11-08-2019 10:23 PM
@sampathkumar_ma - In HDP, Hive only supports MapReduce and Tez as execution engines. Running Hive on Spark is not supported in HDP at this time.
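For example, you can switch between the supported engines for the current session:
set hive.execution.engine=tez;
-- or fall back to MapReduce:
set hive.execution.engine=mr;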
10-22-2019 07:30 PM
@HadoopHelp - It seems the "load data inpath" command is the same in both cases. Please check whether you shared it by mistake. Also, let me know the error message you are getting while loading the table.
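For reference, a minimal sketch of the syntax (the path and table name are hypothetical):
LOAD DATA INPATH '/user/hadoop/input/data.csv' INTO TABLE mytable;
-- add OVERWRITE to replace the existing data instead of appending:
LOAD DATA INPATH '/user/hadoop/input/data.csv' OVERWRITE INTO TABLE mytable;
Note that "load data inpath" moves (rather than copies) the file from the source HDFS location into the table's directory.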
10-17-2019 01:31 AM
@rohan_kulkarni - The Tez UI relies on the Application Timeline Server, which acts as a backing store for the application data generated during the lifetime of a YARN application. You can refer to the article below for more information: https://tez.apache.org/tez-ui.html
10-17-2019 01:24 AM
@rohan_kulkarni - If you are using HDP 2.6.5 or an older version, you can check this from the Tez View. The Tez UI has two tabs, "Hive Queries" and "All DAGs". Hive Queries shows the query start and end times, and All DAGs shows all the information about the DAGs. Can you please check and confirm whether this gives you what you need? The YARN UI shows the application start and end times. Otherwise, you need to grep the HiveServer2 logs with suitable keywords to get the query details.
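For example, a minimal sketch (the log path varies by installation, and the patterns are illustrative):
grep 'Executing command(queryId=' /var/log/hive/hiveserver2.log
grep 'Completed executing command(queryId=' /var/log/hive/hiveserver2.log
Matching the two lines for the same queryId gives you the start and end of a query.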
10-16-2019 10:37 AM
1 Kudo
@RP3003 - It's not possible to upgrade an individual component. You have to use the version shipped with that particular HDP release, or else upgrade the cluster to the required version. If your question is answered, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs-up button.
10-16-2019 07:21 AM
@RP3003 - Atlas 0.8.0 ships with HDP 2.6.5. If you are looking for Atlas 2.0.0, you have to install or upgrade to HDP 3.1.4, which is the latest HDP version.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/comp_versions.html
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/release-notes/content/comp_versions.html