Member since: 05-09-2016
Posts: 39
Kudos Received: 23
Solutions: 12
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 742 | 06-29-2017 03:11 PM |
| | 447 | 06-28-2017 12:06 PM |
| | 420 | 06-28-2017 08:28 AM |
| | 1442 | 06-21-2017 06:19 AM |
| | 360 | 06-13-2017 01:12 PM |
04-12-2018
11:07 AM
2 Kudos
@Amit Sehgal You can refer to the HDP docs for OS support. Please see the support matrix for HDP 2.6.4 below. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_support-matrices/content/ch01.html
07-04-2017
11:12 AM
@Simran Kaur Please try "mapreduce.job.priority", as "mapred.job.priority" is now deprecated.
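For example, a minimal sketch of setting it at submit time (the jar, class, and paths are hypothetical, and the driver is assumed to implement Tool so -D options are parsed):
# Sketch: submit with the non-deprecated property
# (valid values: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW)
hadoop jar my-app.jar com.example.MyDriver -Dmapreduce.job.priority=HIGH /input /output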
07-04-2017
06:30 AM
@Simran Kaur You can create a separate YARN queue for these jobs, using the Capacity Scheduler or the YARN Queue Manager view in Ambari: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.0.1/bk_ambari-views/content/using_the_capacity_scheduler_view.html You can then use this queue for your high-priority Oozie jobs: https://stackoverflow.com/questions/32438052/job-queue-for-hive-action-in-oozie Kindly let me know if this helps.
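As a rough sketch (the queue name, hosts, and paths below are assumptions, not values from your cluster), the job.properties for the high-priority Oozie job could route both the launcher and the launched job to the new queue:
# job.properties sketch for a queue named "highprio"
nameNode=hdfs://namenode-host:8020
jobTracker=resourcemanager-host:8050
oozie.wf.application.path=${nameNode}/user/oozie/apps/my-wf
oozie.launcher.mapred.job.queue.name=highprio
mapred.job.queue.name=highprio
Then submit as usual:
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run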
06-29-2017
03:11 PM
@npandey This can happen due to a conflict between the Jersey RuntimeDelegate in the YARN client libs and the copy in Spark's assembly jar. Please refer to the article below for more information. https://community.hortonworks.com/articles/101145/spark-job-failure-with-javalanglinkageerror-classc.html
Also, note that the hive-site.xml you pass should contain only the Spark-related properties, such as the metastore information. You can download this for your Spark job via the "download client configs" option in Ambari.
Passing the complete file (/etc/hive/conf/hive-site.xml) may include ATS-related properties, which can also cause this issue.
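For example, one way to pull that client-config bundle without the UI is the Ambari API (the host, cluster name, and credentials below are placeholders):
# Sketch: download the Hive client configuration tarball from Ambari
curl -u admin:admin -H 'X-Requested-By: ambari' -o hive-client-configs.tar.gz \
  "http://ambari-host:8080/api/v1/clusters/MYCLUSTER/services/HIVE/components/HIVE_CLIENT?format=client_config_tar"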
06-29-2017
09:45 AM
@Sebastien Chausson You can do this by adding --files in the spark-opts tag of your Spark action:
<spark-opts>--executor-memory 20G --num-executors 50 --files hdfs://(complete hdfs path)</spark-opts>
As an alternative, you could use a shell action and pass your spark-submit command directly to it.
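A minimal sketch of that shell-action alternative, i.e. the script the action would run (the class, jar, and file paths are hypothetical):
# Sketch: equivalent spark-submit issued from an Oozie shell action
spark-submit --master yarn --deploy-mode cluster \
  --executor-memory 20G --num-executors 50 \
  --files hdfs:///path/to/app.conf \
  --class com.example.MyApp hdfs:///apps/my-app.jar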
06-29-2017
09:29 AM
1 Kudo
@Sara Alizadeh You can use Falcon to run a Sqoop import periodically. Please see the link below, which explains this in detail. https://falcon.apache.org/ImportExport.html
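For reference, a minimal CLI sketch of submitting such entities (the entity files and names are placeholders; the feed's import policy is what triggers the periodic Sqoop pull):
# Sketch: register the datasource, then submit and schedule the importing feed
falcon entity -type datasource -submit -file mysql-datasource.xml
falcon entity -type feed -submit -file import-feed.xml
falcon entity -type feed -schedule -name import-feed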
06-28-2017
12:06 PM
1 Kudo
@Pranav Manwatkar Please make sure the steps below are done; missing either can cause this behaviour.
1. Bootstrapping in the target database.
2. The following properties need to be configured for Hive in both clusters:
hive.metastore.event.listeners = org.apache.hive.hcatalog.listener.DbNotificationListener
hive.metastore.dml.events = true
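A possible sketch for step 2 using the configs.sh script that ships with Ambari (the Ambari host, cluster name, and credentials are assumptions):
# Sketch: set both hive-site properties on each cluster, then restart Hive
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari-host MYCLUSTER \
  hive-site hive.metastore.event.listeners org.apache.hive.hcatalog.listener.DbNotificationListener
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari-host MYCLUSTER \
  hive-site hive.metastore.dml.events true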
06-28-2017
08:40 AM
1 Kudo
I am trying to load a CSV file into PySpark with the query below.
sample = sqlContext.load(source="com.databricks.spark.csv", path = '/tmp/test/20170516.csv', header = True, inferSchema = True)
But I am getting an error saying:
py4j.protocol.Py4JJavaError: An error occurred while calling o137.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
06-28-2017
08:28 AM
1 Kudo
@nyadav That message is not what is causing your workflow to fail. Please see the article below, which explains the same message.
https://community.hortonworks.com/questions/57384/oozie-mysql-error-hortonworks-25oozie-mysql-error.html
Can you please provide the Oozie logs for the scheduled workflow?
oozie job -oozie http://<oozie host>:<oozie port>/oozie -log <WF_ID>
06-21-2017
06:19 AM
@Peter Kim The issue is caused by a mismatch between the hostnames in Ambari and SmartSense. You can use the doc below to update the name in Ambari: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-administration/content/ch_changing_host_names.html
You can also try to deregister and re-register the agent with the steps below.
If the host is no longer accessible, use a curl command to unregister it from SmartSense, for example:
curl -u admin:admin -X PUT -d "{\"status\":\"UNREGISTERED\"}" http://HST_SERVER:9000/api/v1/agents/<AGENT_HOSTNAME>
If a node in the HDP cluster is not listed by the hst list-agents command, it can be added to SmartSense with the following command, or via the Add Service option in the Ambari Hosts UI:
# hst setup-agent
Enter SmartSense Tool Server hostname: HST_SERVER
06-19-2017
02:42 PM
@Timothy Spann Since you already have an existing MySQL installation, the new MySQL version is causing the conflict. Please try one of the options below.
1. Use the existing MySQL instance for the installation.
2. Use another external DB, such as Postgres.
06-19-2017
02:35 PM
@Gaurav Vats Please run the query below, followed by an Ambari server restart. After that, log in using "admin" as the password.
UPDATE ambari.users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin';
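A hedged sketch of applying it, assuming a Postgres-backed Ambari with the default database and user (adjust the client invocation for MySQL or another backend):
# Sketch: run the reset query, then restart Ambari server
psql -U ambari -d ambari -c "UPDATE ambari.users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin';"
ambari-server restart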
06-13-2017
01:12 PM
@rakanchi Can you try to add this using the Falcon CLI with the command below, and update the complete mirror definition in the process XML file?
falcon entity -type process -submit -file <process.xml>
The UI validation is causing it to fail.
03-27-2017
10:31 AM
@nyadav Setting the root.acl properties resolved the issue.
03-27-2017
09:58 AM
Yarn Web Interface reporting '0' for many metrics for successfully completed applications.
{
"id": "application_1480114771287_15299",
"user": "xx",
"name": "usr",
"queue": "default",
"state": "FINISHED",
"finalStatus": "SUCCEEDED",
"progress": 100,
"trackingUI": "History",
"trackingUrl": "http://<HOST>:8088/proxy/application_1480114771287_15299/",
"diagnostics": "",
"clusterId": 1480114771287,
"applicationType": "MAPREDUCE",
"applicationTags": "",
"startedTime": 0,
"finishedTime": 0,
"elapsedTime": 0,
"allocatedMB": 0,
"allocatedVCores": 0,
"runningContainers": 0,
"memorySeconds": 172160,
"vcoreSeconds": 49,
"queueUsagePercentage": 0,
"clusterUsagePercentage": 0,
"preemptedResourceMB": 0,
"preemptedResourceVCores": 0,
"numNonAMContainerPreempted": 0,
"numAMContainerPreempted": 0
}
Tags: Hadoop Core, YARN
Labels: Apache YARN
03-27-2017
08:26 AM
@nyadav It appears that the block size differs between your two clusters. You can set the -preserveBlockSize or -skipChecksum flags as below.
1. Suspend all Falcon jobs.
2. Modify the Falcon mirroring template at /usr/hdp/current/falcon-server/data-mirroring/workflows/hdfs-replication-workflow.xml. Add the following argument at the end of the argument list to preserve the block size:
<arg>-preserveBlockSize</arg>
<arg>true</arg>
3. Restart Falcon through Ambari.
4. Resubmit the job and verify that the HDFS mirror job is now working. This sets the property for all mirror jobs.
03-27-2017
06:50 AM
1 Kudo
@kerra Can you retrieve the feed definition from the Falcon CLI to validate it?
$FALCON_HOME/bin/falcon entity -type [cluster|datasource|feed|process] -name <<name>> -definition
https://falcon.apache.org/FalconCLI.html
Also, you can check the job definition and job configuration from the related Oozie workflow.
03-22-2017
12:17 PM
@nyadav Thanks for the information. I was able to insert after creating the DataFrame.
03-22-2017
08:45 AM
1 Kudo
@nyadav The permissions for /var/run/ambari-server should be set up correctly. A healthy listing looks like this:
ll /var/run/ambari-server/
total 12
-rw-r--r-- 1 root root 6 Jan 4 05:10 ambari-server.pid
drwxr-xr-x 4 root root 4096 Jan 4 05:26 bootstrap
drwxr-xr-x 39 root root 4096 Jan 17 18:14 stack-recommendations
Check and correct your permissions and try again.
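If your listing differs, a minimal fix sketch (assuming Ambari server runs as root, as in the listing above):
# Sketch: restore the expected ownership and modes, then restart
chown -R root:root /var/run/ambari-server
chmod 755 /var/run/ambari-server/bootstrap /var/run/ambari-server/stack-recommendations
ambari-server restart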
03-22-2017
06:21 AM
1 Kudo
Not able to insert data into a Hive table through Spark:
scala> sqlContext.sql("insert into table results_test_hive values('XXXXXXXXXX', 'm:X', 0.0)")
It failed with the error below:
org.apache.spark.sql.AnalysisException: Unsupported language features in query: insert into table results_test_hive values('XXXXXXXXXX', 'm:X', 0.0)
Tags: Data Science & Advanced Analytics, Spark, Upgrade to HDP 2.5.3 : ConcurrentModificationException When Executing Insert Overwrite : Hive
Labels: Apache Spark
03-22-2017
06:19 AM
@nyadav I am getting the exception below while running this command:
rmr /hbase-secure/table/hbase:acl
Authentication is not valid : /hbase-secure/table/hbase:acl
03-21-2017
06:15 AM
@nyadav Yes, I'm able to create and list tables from the shell. The only issue is when granting permissions. Also, Ranger is not enabled.
03-21-2017
06:09 AM
3 Kudos
While granting a user permissions in HBase, it fails with the error below. We recently de-kerberized and re-kerberized the cluster. "ERROR ArgumentError: DISABLED: Security features are not available"
Tags: HBase
Labels: Apache HBase
03-20-2017
07:24 AM
1 Kudo
@Kalim Julia You need to set the properties below to be able to delete. These configuration parameters must be set appropriately to turn on transaction support in Hive.
Client Side:
- hive.support.concurrency – true
- hive.enforce.bucketing – true (not required as of Hive 2.0)
- hive.exec.dynamic.partition.mode – nonstrict
- hive.txn.manager – org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
Server Side (Metastore):
- hive.compactor.initiator.on – true (see the linked page below for more details)
- hive.compactor.worker.threads – a positive number on at least one instance of the Thrift metastore service
For further information, refer to the link below: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-Transaction/LockManager
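For example, a sketch of the client-side settings in a Beeline session (the JDBC URL and table are placeholders; the target table must already be ACID-compatible, i.e. bucketed ORC with transactional=true):
# Sketch: enable transaction support for the session, then delete
beeline -u jdbc:hive2://hiveserver-host:10000 -e "
SET hive.support.concurrency=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
DELETE FROM my_acid_table WHERE id = 42;"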
03-16-2017
10:35 AM
@Stwo X Do you know which server or service it is trying to connect to? Does it give any other information besides this? Also check the Apache JIRA below: https://issues.apache.org/jira/browse/ZEPPELIN-305
03-14-2017
12:59 PM
@Stwo X Are you running this on a Sandbox? Can you check for any network issues and confirm that all HDP services are running? Also check for exceptions in zeppelin-interpreter-spark.log.
03-14-2017
12:47 PM
@Kevin Gao It appears to be an I/O error. Please check whether there is enough storage, and validate the permissions as well.
03-14-2017
12:38 PM
@n c Can you check for relevant exceptions in falcon.application.log? Also, please see the articles below: https://community.hortonworks.com/questions/11862/falcon-ui-not-working.html https://community.hortonworks.com/questions/77600/faclon-web-ui-failing-with-http-503-service-unavai.html
03-14-2017
12:25 PM
2 Kudos
@Ashnee Sharma You can set priorities for your MapReduce jobs using "mapred job -set-priority <job-id> <priority>". More information at the link below: https://hadoop.apache.org/docs/r2.7.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#job You can also set up preemption to make sure high-priority jobs get the desired resources: https://community.hortonworks.com/questions/8725/capacityscheduler-job-priority-preemption.html
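For instance (the job id below is hypothetical):
# Sketch: raise the priority of an already-running MapReduce job
mapred job -set-priority job_1480114771287_15299 HIGH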