Member since: 03-02-2020
Posts: 16
Kudos Received: 0
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1231 | 07-03-2020 08:19 AM |
|  | 1563 | 04-16-2020 02:56 AM |
|  | 1921 | 03-26-2020 05:21 AM |
07-03-2020
08:19 AM
It looks like jps works only if the JDK is installed, and in my environment only the JRE was installed. So I installed the JDK as below:

```
yum list java*devel*
Loaded plugins: langpacks, ulninfo
Available Packages
java-1.6.0-openjdk-devel.x86_64   1:1.6.0.41-1.13.13.1.el7_3       ol7_latest
java-1.7.0-openjdk-devel.x86_64   1:1.7.0.261-2.6.22.2.0.1.el7_8   ol7_latest
java-1.8.0-openjdk-devel.i686     1:1.8.0.252.b09-2.el7_8          ol7_latest
java-1.8.0-openjdk-devel.x86_64   1:1.8.0.252.b09-2.el7_8          ol7_latest

[root@cloudera opt]# yum install java-1.8.0-openjdk-devel.x86_64
```

Now I can issue jps and it shows all the Hadoop services running. Thanks. Regards, GTA
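As a quick sanity check (a minimal sketch; the exact process names depend on which services the node runs), you can confirm that jps now resolves and lists the Hadoop daemons:

```bash
# jps ships with the JDK (not the JRE), so it should now be on the PATH
which jps

# -l prints the fully qualified main class, which makes the Hadoop
# daemons (NameNode, DataNode, ResourceManager, ...) easy to recognize
jps -l
```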
... View more
04-18-2020
10:03 AM
Hi friends, I'm on CDH 6.2, and I'm trying to run the jps command from the command prompt. I'm not sure why it does not show the Hadoop services; issuing it returns the error below:

```
[root@cloudera bin]# jps
bash: jps: command not found...
```

Kindly help me out with this. Regards, GTA
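In case it helps others diagnose the same symptom, a minimal check (assuming an RPM-based OS like the one in this thread) for whether a full JDK is present:

```bash
# jps is part of the JDK; a JRE-only install does not provide it
rpm -qa | grep -i openjdk

# If only non-devel (JRE) packages are listed, the JDK/devel package is missing
```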
... View more
04-16-2020
02:56 AM
Hi guys, I found that the reason for Solr not accepting my date field is a missing date fieldType definition for the field name request_date in my schema.xml file. After adding the date fieldType below, I could successfully create the Solr collection:

```xml
<fieldType name="date" class="solr.DateRangeField"/>
```

Thanks. Regards, GTA
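For context, a minimal sketch of how the two schema.xml pieces fit together (the field names come from this thread; the rest is standard Solr schema syntax): the fieldType declares the type once, and each field then references it by name:

```xml
<!-- Declare the type once at schema level -->
<fieldType name="date" class="solr.DateRangeField"/>

<!-- Any field can then reference it via type="date" -->
<field name="request_date" type="date" indexed="true" stored="true"/>
```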
... View more
04-16-2020
01:02 AM
Thanks a lot for your reply and for your solution, Tony :-) Regards, GTA
... View more
04-09-2020
05:28 AM
Hi friends,
I have the Cloudera trial version 6.2. When I try to start the Spark shell from the command prompt using spark-shell, I get the error below:
```
[root@cloudera tmp]# spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/04/09 08:19:33 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
        at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:345)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:184)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:511)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
        at $line3.$read$$iw$$iw.<init>(<console>:15)
        at $line3.$read$$iw.<init>(<console>:43)
        at $line3.$read.<init>(<console>:45)
        at $line3.$read$.<init>(<console>:49)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:7)
        at $line3.$eval$.$print(<console>:6)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
        at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
        at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:108)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:211)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
        at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
        at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
        at org.apache.spark.repl.Main$.doMain(Main.scala:78)
        at org.apache.spark.repl.Main$.main(Main.scala:58)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/04/09 08:19:33 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
20/04/09 08:19:33 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
20/04/09 08:19:33 ERROR repl.Main: Failed to initialize Spark session.
java.lang.IllegalArgumentException: Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
        ... (same stack trace as above)
```
I'm not sure of the reason behind the above error. Kindly help me out.
Regards,
GTA
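For what it's worth, the exception itself points at the fix: the requested executor memory (1024 MB) plus overhead (384 MB) is 1408 MB, above YARN's 1024 MB per-container ceiling. A sketch of the two usual remedies (the memory values here are illustrative, not prescriptive):

```bash
# Option 1: request a smaller executor so memory + overhead fits
# under the current 1024 MB cap (512 + 384 = 896 MB)
spark-shell --executor-memory 512m

# Option 2: raise the YARN ceilings named in the error message
# (in Cloudera Manager: YARN -> Configuration, then restart YARN):
#   yarn.scheduler.maximum-allocation-mb >= 1408 (e.g. 2048)
#   yarn.nodemanager.resource.memory-mb  >= 1408 (e.g. 2048)
```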
... View more
Labels:
- Apache Spark
- Apache YARN
03-28-2020
01:23 AM
Hi Li, Thanks for explaining the detailed steps. This helps! Regards, GTA
... View more
03-27-2020
12:34 AM
Hi Lu, My trial license still has 21 more days to go. Actually, I'm wondering how to navigate to the Cloudera Navigator console. The only thing I'm trying is logging in to Cloudera Manager and then clicking Clusters to look for a Navigator launch URL, which I'm not able to find. It's the navigation path to Cloudera Navigator that I'm missing, and that is where I'm stuck. Regards, GTA
... View more
03-26-2020
11:03 PM
Hi Li, Thanks for the reply. I do have the Cloudera Enterprise trial version. Is it that Cloudera Navigator is not accessible in the trial version? Brgds, GTA
... View more
03-26-2020
07:27 AM
Hi friends,
I have CDH 6.2.1 set up on my local server. I can log in to Cloudera Manager to monitor all the components and their running status. Meanwhile, I'm trying to find out how to access Cloudera Navigator and am stuck. Can you guide me on how to access it?
Even though I followed the steps below, it didn't help, as I couldn't locate Navigator:
How can I access the Cloudera Navigator console?
The Cloudera Navigator console can be accessed from the Cloudera Manager Admin Console or directly on the Navigator Metadata Server instance. Using the Cloudera Manager Admin Console as a starting point requires the Cloudera Manager roles of either Navigator Administrator or Full Administrator.
From the Cloudera Manager Admin Console:
1. Select Clusters > Cluster-n.
2. Select Cloudera Navigator from the menu.
To access the Cloudera Navigator console directly:
1. Open the browser to the host name of the node in the cluster that runs the Cloudera Navigator Metadata Server. For example, if node 2 in the cluster is running the Navigator Metadata Server role, the URL to the Cloudera Navigator console (assuming the default port of 7187) might look as follows:
http://fqdn-2.example.com:7187/login.html
2. Log in to the Cloudera Navigator console using the credentials assigned by your administrator.
Regards, GTA
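If it helps to locate which host actually runs the Navigator Metadata Server role, one option is the Cloudera Manager REST API (a sketch; the credentials, host name, and API version are placeholders for your environment):

```bash
# The Navigator Metadata Server is a role of the Cloudera Management Service;
# its host is where the Navigator console (default port 7187) is served.
curl -s -u admin:admin \
  'http://cm-host.example.com:7180/api/v19/cm/service/roles' \
  | grep -B2 -A2 NAVIGATORMETASERVER
```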
... View more
Labels:
- Cloudera Manager
- Cloudera Navigator
03-26-2020
05:21 AM
Hi Venkat, Thanks for the reply, and sorry for my delayed response. As I understand it, I had tried creating the Solr collection from the wrong path (instead of from solr_home). After correcting this, I'm able to create the collection. Regards, GTA
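For anyone hitting the same thing, a sketch of the working sequence (the solr_home path is a placeholder; solrctl and its instancedir subcommands are the CDH tooling this thread already uses):

```bash
# Run from the intended Solr config location, not an arbitrary directory
cd /path/to/solr_home            # placeholder: your cluster's Solr home

# Generate a local config template, upload it, then create the collection
solrctl instancedir --generate live_logs_config
solrctl instancedir --create live_logs live_logs_config
solrctl collection --create live_logs -s 1
```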
... View more
03-26-2020
05:13 AM
Hi friends,
After adding my field tags to the schema.xml file as below:

```xml
<field name="_version_" type="plong" indexed="true" stored="true" multiValued="false"/>
<field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false"/>
<field name="ip" type="text_general" indexed="true" stored="true"/>
<field name="request_date" type="date" indexed="true" stored="true"/>
<field name="request" type="text_general" indexed="true" stored="true"/>
<field name="department" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="category" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="product" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="action" type="string" indexed="true" stored="true" multiValued="false"/>
```
When I try to create the Solr collection, it returns the error below:

```
Unable to create core [live_logs15_shard1_replica_n1]
Caused by: Unknown fieldType 'date' specified on field request_date
```

It is not accepting the date type I specified. If I instead declare the request_date field as string, the collection is created successfully; so Solr is rejecting only the date type, and I'm not sure of the reason behind this issue.
Thanks in advance.
Regards,
GTA
... View more
Tags:
- solr
Labels:
- Apache Solr
03-15-2020
06:23 AM
Hi Friends,
I'm trying to follow the Cloudera get-started tutorial, and within it I'm following the link below:
Exercise 3: Explore log events interactively
https://www.cloudera.com/developers/get-started-with-hadoop-tutorial/exercise-3.html
When I try to create the collection using Solr with the command below:

```
solrctl --zk cloudera.myhost.com:2181/solr collection --create live_logs -s 1
```

I get the error below:

```
[root@cloudera flume]# solrctl --zk cloudera.myhost.local:2181/solr collection --create live_logs -s1
{
  "responseHeader":{
    "status":0,
    "QTime":32369},
  "failure":{
    "cloudera.myhost.local:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://cloudera.myhost.local:8983/solr: Error CREATEing SolrCore 'live_logs_shard1_replica_n1': Unable to create core [live_logs_shard1_replica_n1] Caused by: solr.IntField"}}
[root@cloudera flume]#
```

The underlying server exception is:

```
org.apache.solr.common.SolrException: Error CREATEing SolrCore 'live_logs_shard1_replica_n1': Unable to create core [live_logs_shard1_replica_n1] Caused by: solr.IntField
```

I'm not sure of the reason behind this issue. Kindly help me with a fix, guys.
Regards,
GTA
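One thing worth checking (this is an inference from the "Caused by: solr.IntField" message, not something confirmed in the thread): solr.IntField is a legacy field class that modern Solr no longer ships, and CDH 6 bundles Solr 7. If the collection's schema.xml still declares it, swapping in the current point-based class may let the core load:

```xml
<!-- Legacy declaration that Solr 7 cannot load: -->
<!-- <fieldType name="int" class="solr.IntField"/> -->

<!-- Solr 7 replacement: IntPointField is the current integer field class -->
<fieldType name="int" class="solr.IntPointField" docValues="true"/>
```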
... View more
Labels:
- Apache Solr
03-03-2020
01:50 AM
Hi Steve, Thanks for the update and for referring me to the related post. Regards, GTA
... View more
03-03-2020
01:17 AM
Hi Steve, Thanks for the reply. So the QuickStart VM is no longer available for download, right? And without the QuickStart VM we cannot practice those Cloudera tutorials with the samples, right? Regards, GTA
... View more
03-02-2020
11:06 PM
Hi friends,
I'm new to Cloudera and have just installed the Cloudera Enterprise version on a separate server. I'm practicing the Cloudera tutorials. While following
Exercise 2: Correlate structured data with unstructured data
it asks to move the log file from this location to HDFS:
/opt/examples/log_files/access.log.2
But I'm not able to find the examples folder under /opt. I'm not sure why the samples are not available. Kindly let me know how to install the examples separately.
Thanks in advance.
Regards,
GTA
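For reference, once access.log.2 is available locally, the HDFS step itself is a plain copy (a sketch; the target directory follows the tutorial's Hive-warehouse convention, but any writable HDFS path works):

```bash
# Create a landing directory in HDFS and copy the sample log into it
sudo -u hdfs hadoop fs -mkdir -p /user/hive/warehouse/original_access_logs
sudo -u hdfs hadoop fs -copyFromLocal /opt/examples/log_files/access.log.2 \
    /user/hive/warehouse/original_access_logs/
```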
... View more
Labels:
- Cloudera Manager
- HDFS