Member since: 07-01-2015
Posts: 460
Kudos Received: 78
Solutions: 43

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1360 | 11-26-2019 11:47 PM |
| | 1309 | 11-25-2019 11:44 AM |
| | 9517 | 08-07-2019 12:48 AM |
| | 2193 | 04-17-2019 03:09 AM |
| | 3520 | 02-18-2019 12:23 AM |
02-14-2019
02:38 PM
AFAIK, the Struts problem is a false positive, because you can't get that port to run the example exploit code: https://blog.appsecco.com/detecting-and-exploiting-the-java-struts2-rest-plugin-vulnerability-cve-2017-9805-765773921d3d Has anyone got a solution to the Tomcat NFS upgrade problem? That looks tricky.
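If you want to sanity-check the false positive yourself, a minimal reachability probe (not exploit code) can show whether the Struts REST plugin even accepts XML on that port. This is a sketch; the host, port, and path are placeholders for whatever your scanner flagged.

```scala
import java.net.{HttpURLConnection, URL}

object StrutsRestProbe {
  def main(args: Array[String]): Unit = {
    // Placeholder endpoint: substitute the host/port/path your scanner reported.
    // This only checks whether the REST plugin accepts an XML body at all;
    // it does not attempt the CVE-2017-9805 deserialization payload.
    val url = new URL("http://scanned-host.example.com:8080/orders/3")
    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/xml")
    conn.setDoOutput(true)
    conn.getOutputStream.write("<map/>".getBytes("UTF-8"))
    println(s"HTTP status: ${conn.getResponseCode}")
    conn.disconnect()
  }
}
```

A 404/405 or a refused connection here supports the "false positive" reading, since the vulnerable handler isn't reachable on that port.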
02-05-2019
08:35 AM
Altus Director does have some limited ability to use an existing CM server (see the notes in https://github.com/cloudera/director-scripts/blob/master/configs/aws.reference.conf#L487-L490). There are gaps in which CM-related features will work, especially around TLS/SSL and external databases. Do you happen to have a backup of the old Director's database? If so, you could use it to restore the state on a new Director server. For either of the above paths, I would recommend opening a support case with Cloudera if you need to pursue this for a production cluster.
01-29-2019
09:13 AM
Note: the process below is easier if the node is a gateway node, because the correct Spark version and the configuration directories will be readily available for mounting into the Docker container.

The quick-and-dirty way is to have an installation of Spark matching your cluster's major version installed or mounted in the Docker container. You will also need to mount the YARN and Hadoop configuration directories in the container. Mounting these prevents you from needing to set a ton of config on submission, e.g. "spark.hadoop.yarn.resourcemanager.hostname","XXX". Often both can be set to the same value: /opt/cloudera/parcels/SPARK2/lib/spark2/conf/yarn-conf.

The SPARK_CONF_DIR, HADOOP_CONF_DIR, and YARN_CONF_DIR environment variables need to be set if using spark-submit. If using SparkLauncher, they can be set like so:

```scala
import scala.collection.JavaConverters._
import org.apache.spark.launcher.SparkLauncher

val env = Map(
  "HADOOP_CONF_DIR" -> "/example/hadoop/path",
  "YARN_CONF_DIR" -> "/example/yarn/path"
)
val launcher = new SparkLauncher(env.asJava).setSparkHome("/path/to/mounted/spark")
```

If submitting to a kerberized cluster, the easiest way is to mount a keytab file and the /etc/krb5.conf file in the Docker container, then set the principal and keytab using spark.yarn.principal and spark.yarn.keytab, respectively.

For ports, 8032 on the Spark master (the YARN ResourceManager) definitely needs to be open to traffic from the Docker node. I am not sure if this is the complete list of ports - could another user verify?
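Putting those pieces together, here is a minimal end-to-end sketch, assuming the yarn-conf directory and a keytab have been mounted into the container at the paths shown; the paths, application jar, main class, and principal are all hypothetical placeholders.

```scala
import scala.collection.JavaConverters._
import org.apache.spark.launcher.SparkLauncher

object DockerSubmitSketch {
  def main(args: Array[String]): Unit = {
    // Assumes the cluster's config dir was mounted into the container
    // (placeholder paths; both env vars can point at the same directory).
    val env = Map(
      "HADOOP_CONF_DIR" -> "/opt/cloudera/parcels/SPARK2/lib/spark2/conf/yarn-conf",
      "YARN_CONF_DIR"   -> "/opt/cloudera/parcels/SPARK2/lib/spark2/conf/yarn-conf"
    )
    val app = new SparkLauncher(env.asJava)
      .setSparkHome("/path/to/mounted/spark")
      .setMaster("yarn")
      .setDeployMode("cluster")
      .setAppResource("/path/to/app.jar")   // hypothetical application jar
      .setMainClass("com.example.Main")     // hypothetical main class
      .setConf("spark.yarn.principal", "user@EXAMPLE.COM")       // placeholder
      .setConf("spark.yarn.keytab", "/mnt/keytabs/user.keytab")  // mounted keytab
      .launch()
    app.waitFor()
  }
}
```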
01-26-2019
08:07 AM
Another approach to inserting the data, which we follow in our project, is not to insert the data into Hive directly from Spark, but instead to do the following (a sketch of steps 1 and 2 is below):

1. Read the input CSV file in Spark and transform the data according to the requirement.
2. Save the data back into an output CSV file in HDFS.
3. Push the data from the output CSV into Hive using a hive -f or hive -e command from the shell.
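A minimal Spark sketch of steps 1 and 2, assuming placeholder HDFS paths and an example filter as the transformation; the HQL file in the final comment is likewise hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object CsvToHiveStaging {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("csv-to-hive-staging").getOrCreate()

    // Step 1: read the input CSV and apply transformations (placeholder path/column).
    val input = spark.read.option("header", "true").csv("hdfs:///data/input/")
    val transformed = input.filter("some_column IS NOT NULL")

    // Step 2: write the result back to HDFS as CSV.
    transformed.write.mode("overwrite").option("header", "true").csv("hdfs:///data/output/")

    spark.stop()
    // Step 3 runs outside Spark, e.g.: hive -f load_output.hql
  }
}
```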
01-22-2019
12:13 AM
1 Kudo
Annoying bug, but a very simple solution: I had made a mistake in KrbHostFQDN. That should be the impalad FQDN.
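For reference, a minimal sketch of a kerberized Impala JDBC connection where KrbHostFQDN points at the impalad host. The host names, realm, and driver class version below are assumptions; check them against your own driver's documentation.

```scala
import java.sql.DriverManager

object ImpalaKerberosConnect {
  def main(args: Array[String]): Unit = {
    // Driver class name varies by driver version (jdbc4 vs jdbc41) - placeholder.
    Class.forName("com.cloudera.impala.jdbc41.Driver")
    // AuthMech=1 selects Kerberos; KrbHostFQDN must be the impalad's FQDN,
    // not e.g. a load balancer alias (hosts and realm are placeholders).
    val url = "jdbc:impala://impalad-01.example.com:21050;" +
      "AuthMech=1;KrbRealm=EXAMPLE.COM;" +
      "KrbHostFQDN=impalad-01.example.com;KrbServiceName=impala"
    val conn = DriverManager.getConnection(url)
    println(s"Connected: ${!conn.isClosed}")
    conn.close()
  }
}
```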
01-06-2019
07:32 PM
Hi Tomas79, Thanks for the reply. Strangely, I see swappiness is not set to what Cloudera recommends. This is my new role - it seems a lot needs to be done! Thanks, Wert.
01-02-2019
07:59 AM
I think it is indexed in the internal search index of the Hue service, but this is just a guess - I have not checked the source code.
12-15-2018
11:57 PM
So the problem was with snapshots. I had configured snapshots a long time ago on the /user/hive/warehouse directory, and they were still being generated. I was finding the space usage using the commands:

hadoop fs -du -h /user/hive
hadoop fs -du -h /user/hive/warehouse

Snapshottable directories can be listed using:

hdfs lsSnapshottableDir

and a snapshot can be deleted with:

hadoop fs -deleteSnapshot <path without .snapshot> <snapshotname>
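If you prefer to do this from code rather than the CLI, the Hadoop FileSystem API exposes the same operations. A sketch, assuming HADOOP_CONF_DIR is on the classpath so the default Configuration points at your cluster; the snapshot name is a placeholder.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

object SnapshotCleanupSketch {
  def main(args: Array[String]): Unit = {
    val fs = FileSystem.get(new Configuration()).asInstanceOf[DistributedFileSystem]

    // Roughly equivalent to `hdfs lsSnapshottableDir`.
    val listing = fs.getSnapshottableDirListing
    if (listing != null) listing.foreach(dir => println(dir.getFullPath))

    // Roughly equivalent to `hadoop fs -deleteSnapshot`;
    // "old-snapshot-name" is a placeholder.
    fs.deleteSnapshot(new Path("/user/hive/warehouse"), "old-snapshot-name")
  }
}
```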
12-13-2018
05:36 AM
Congratulations on solving the issue. Could you please mark the appropriate reply as the solution to your issue? That way it can assist others facing a similar situation.
12-12-2018
12:58 AM
Tomas, that's great - exactly what I was after. Many thanks.