Member since
02-01-2019
650
Posts
143
Kudos Received
117
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2612 | 04-01-2019 09:53 AM |
| | 1376 | 04-01-2019 09:34 AM |
| | 6474 | 01-28-2019 03:50 PM |
| | 1484 | 11-08-2018 09:26 AM |
| | 3610 | 11-08-2018 08:55 AM |
06-29-2017
07:11 PM
@Sami Ahmad Can you try:
javac -cp `hadoop classpath`:/usr/hdp/2.5.3.0-37/hive2/lib/* HiveAlterRenameTo.java
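A minimal sketch of the full compile step, assuming the HDP 2.5.3 Hive lib path from the post and a HiveAlterRenameTo.java in the current directory (adjust both to your cluster):

```shell
# Build the compile classpath from `hadoop classpath` plus the Hive
# client jars (the HDP version in the path is taken from the post).
HIVE_LIB=/usr/hdp/2.5.3.0-37/hive2/lib
CP="$(hadoop classpath 2>/dev/null):${HIVE_LIB}/*"

# Compile only if the source file is actually present.
if [ -f HiveAlterRenameTo.java ]; then
  javac -cp "$CP" HiveAlterRenameTo.java
fi
```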
06-29-2017
07:07 PM
@Shashank Chandhok It looks like Atlas is using JAAS to connect to Kafka. You'd need to remove these properties, since the cluster is unsecured:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin delete localhost <cluster> sqoop.atlas.application.properties atlas.jaas.KafkaClient.option.renewTicket
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin delete localhost <cluster> sqoop.atlas.application.properties atlas.jaas.KafkaClient.option.useTicketCache
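For reference, configs.sh also supports a get action, so you can inspect the current properties before deleting anything. A hedged sketch, assuming the default admin credentials and a placeholder cluster name:

```shell
CONFIGS=/var/lib/ambari-server/resources/scripts/configs.sh
CLUSTER=mycluster   # assumption: substitute your actual cluster name

# Only run against a real Ambari server where the script exists.
if [ -x "$CONFIGS" ]; then
  # Dump the current sqoop.atlas.application.properties to review them first.
  "$CONFIGS" -u admin -p admin get localhost "$CLUSTER" sqoop.atlas.application.properties
  # Then delete the two JAAS properties.
  "$CONFIGS" -u admin -p admin delete localhost "$CLUSTER" sqoop.atlas.application.properties atlas.jaas.KafkaClient.option.renewTicket
  "$CONFIGS" -u admin -p admin delete localhost "$CLUSTER" sqoop.atlas.application.properties atlas.jaas.KafkaClient.option.useTicketCache
fi
```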
06-29-2017
07:05 PM
@Harshil Gala spark-shell --jars <path-to-jar> --master yarn --deploy-mode client/cluster (choose client or cluster accordingly)
06-29-2017
06:55 PM
1 Kudo
@Shashank Chandhok Can you post your "Advanced sqoop-atlas-application.properties" here?
06-29-2017
06:26 PM
1 Kudo
I created a cluster that was force-terminated from the web UI. Now resources are left over in Cloudbreak (blueprint, recipes, network, etc.). Trying to delete them from the UI or shell does not work and results in the following error:
cloudbreak-shell>blueprint delete --id 420
Command failed java.lang.RuntimeException: There are clusters associated with blueprint '420'. Please remove these before deleting the blueprint.
There are clusters associated with blueprint '420'. Please remove these before deleting the blueprint.
java.lang.RuntimeException: There are clusters associated with blueprint '420'. Please remove these before deleting the blueprint.
at com.sequenceiq.cloudbreak.shell.transformer.ExceptionTransformer.transformToRuntimeException(ExceptionTransformer.java:21)
at com.sequenceiq.cloudbreak.shell.commands.common.BlueprintCommands.delete(BlueprintCommands.java:230)
at com.sequenceiq.cloudbreak.shell.commands.common.BlueprintCommands.deleteById(BlueprintCommands.java:237)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
at org.springframework.shell.core.SimpleExecutionStrategy.invoke(SimpleExecutionStrategy.java:68)
at org.springframework.shell.core.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:59)
at org.springframework.shell.core.AbstractShell.executeCommand(AbstractShell.java:134)
at org.springframework.shell.core.JLineShell.promptLoop(JLineShell.java:533)
at org.springframework.shell.core.JLineShell.run(JLineShell.java:179)
at java.lang.Thread.run(Thread.java:745)
Labels:
- Hortonworks Cloudbreak
06-25-2017
04:32 PM
@Jeff Watson On the Spark client node, create a symbolic link to 'hbase-site.xml' in /etc/spark/conf/:
ln -s /etc/hbase/conf/hbase-site.xml /etc/spark/conf/hbase-site.xml
Add the following configurations to 'spark-defaults.conf' through Ambari and restart the Spark service:
spark.executor.extraClassPath /usr/hdp/current/hbase-client/lib/hbase-common.jar:/usr/hdp/current/hbase-client/lib/hbase-client.jar:/usr/hdp/current/hbase-client/lib/hbase-server.jar:/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar:/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/spark-client/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar
spark.driver.extraClassPath /usr/hdp/current/hbase-client/lib/hbase-common.jar:/usr/hdp/current/hbase-client/lib/hbase-client.jar:/usr/hdp/current/hbase-client/lib/hbase-server.jar:/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar:/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/spark-client/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar
Note: Change the jar versions according to the cluster version, and make sure there are no spaces between the jars in the classpath.
For secure clusters, obtain a Kerberos ticket using the kinit command. Launch the Spark shell using the command below:
spark-shell --master yarn-client --num-executors 2 --driver-memory 512m --executor-memory 512m --executor-cores 1
To access a Phoenix table, use the following sample code:
val df = sqlContext.load(
"org.apache.phoenix.spark",
Map("table" -> "TABLE1", "zkUrl" -> "<zk-host>:2181")
)
df.show()
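The snippet above can also be run non-interactively by piping it into spark-shell with a heredoc. A sketch assuming the same launch options; <zk-host> stays a placeholder for your ZooKeeper quorum:

```shell
ZK_URL="<zk-host>:2181"   # placeholder; replace with your ZooKeeper quorum host

# Guard so this only runs on a node with the Spark client installed.
if command -v spark-shell >/dev/null 2>&1; then
  spark-shell --master yarn-client --num-executors 2 \
      --driver-memory 512m --executor-memory 512m --executor-cores 1 <<EOF
val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "TABLE1", "zkUrl" -> "$ZK_URL"))
df.show()
EOF
fi
```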
06-08-2017
09:08 AM
@sai ram Here is the "Cancel Subscription" button.
06-08-2017
09:06 AM
@sai ram Here is the "Cancel Subscription" button.
06-07-2017
06:34 PM
@chandramouli muthukumaran Could you please add the complete error trace?