Member since: 08-18-2016
53 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4333 | 01-17-2017 06:19 AM
08-12-2020
12:37 AM
What mount points do we need to create before a Cloudera installation?
02-28-2019
02:55 AM
All three NameNode directories are pointing to the same metadata: /hadoop/hdfs/namenode, /var/hadoop/hdfs/namenode, and /var/log/hadoop/hdfs/namenode. What is the use of having 3 directories pointing to the same data?
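A likely explanation, assuming these paths come from a comma-separated dfs.namenode.name.dir: the NameNode writes a full copy of its fsimage and edit logs to every listed directory, so each extra directory is a redundant copy that protects the metadata if one disk fails. A quick way to confirm where the list comes from, assuming HDFS client configs are present on the host:
$ hdfs getconf -confKey dfs.namenode.name.dir
# expected output (if configured as above):
# /hadoop/hdfs/namenode,/var/hadoop/hdfs/namenode,/var/log/hadoop/hdfs/namenode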
Labels: Apache Hadoop
02-28-2019
02:47 AM
We have 3 DataNodes in a production cluster and the cluster capacity is 3 TB. Whenever I try to run a TeraSort job on 500 GB of data, one of the NodeManagers stops working.
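For reference, a sketch of the kind of job being described (the examples jar path is an assumption and varies by HDP version; TeraGen rows are 100 bytes, so 5 billion rows is roughly 500 GB). Note that with the default replication factor of 3, the 500 GB input plus 500 GB output alone need about 3 TB of raw disk, which is the entire stated cluster capacity:
$ hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
    teragen 5000000000 /tmp/teragen-500g
$ hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar \
    terasort /tmp/teragen-500g /tmp/terasort-500g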
Labels: Apache Hadoop, Apache YARN
09-05-2018
02:55 AM
How was it fixed?
09-04-2018
08:54 AM
@Geoffrey Shelton Okot please help here.
09-04-2018
05:46 AM
Our jobs are failing in Kafka with the error: "java.lang.AssertionError: assertion failed: Beginning offset 511 is after the ending offset 510 for topic ps-control partition 6. You either provided an invalid fromOffset, or the Kafka topic has been damaged"
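A quick way to see what the broker actually holds for that partition is GetOffsetShell; a sketch assuming an HDP-style Kafka install with a broker on port 6667 (the host name and port are assumptions):
$ /usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
    --broker-list broker1:6667 --topic ps-control --time -2   # -2 = earliest offsets
$ /usr/hdp/current/kafka-broker/bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
    --broker-list broker1:6667 --topic ps-control --time -1   # -1 = latest offsets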
Labels: Apache Ambari, Apache Kafka, Apache Spark
04-03-2017
01:10 PM
You need to start the ZooKeeper server in order to bring the ZKFailoverController up.
The ZKFailoverController is the component that manages the active and standby states of the NameNodes.
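A minimal sketch of the restart sequence, assuming a plain Apache-style install (paths, daemon scripts, and the NameNode service id vary by distribution; with Ambari you would do this from the UI):
$ $ZOOKEEPER_HOME/bin/zkServer.sh start            # on each ZooKeeper host
$ $HADOOP_HOME/sbin/hadoop-daemon.sh start zkfc    # on each NameNode host
$ hdfs haadmin -getServiceState nn1                # verify; 'nn1' is a hypothetical NN service id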
02-22-2017
02:40 PM
$ grep 'Listen' grafana.log
2017/02/22 07:14:40 [I] Listen: http://0.0.0.0:3000
2017/02/22 09:29:05 [I] Listen: http://0.0.0.0:3000
2017/02/22 09:31:09 [I] Listen: http://0.0.0.0:3000
$ grep 'Listen' grafana.out
2017/02/22 09:31:09 [I] Listen: http://0.0.0.0:3000
02-22-2017
01:52 PM
I have one more cluster where Grafana is working fine. One thing I observe there is: https://<hostname>:8443 is my Ambari address,
and when I click on the Grafana UI link it goes to http://<hostname>:3000. So the difference is that Grafana is served over http there, not https.
Can you tell me how we can resolve this?
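One way to reconcile this, sketched under the assumption that the link must stay https, is to enable TLS in the [server] section of grafana.ini (the certificate paths below are placeholders):
[server]
protocol = https
http_port = 3000
cert_file = /etc/grafana/ssl/grafana.crt
cert_key = /etc/grafana/ssl/grafana.key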
02-22-2017
12:41 PM
One thing I want to make clear here: https://<localhost>:8443 is my Ambari address,
and when I click on the Grafana UI link it goes to https://<localhost>:3000.
02-22-2017
12:38 PM
[server]
# Protocol (http or https)
;protocol = http
protocol = http
# The ip address to bind to, empty will bind to all interfaces
;http_addr =
# The http port to use
;http_port = 3000
http_port = 3000
# The public facing domain name used to access grafana from a browser
;domain = localhost
02-22-2017
12:21 PM
1 Kudo
When I click on the Grafana UI link, it takes me to a different page which says: This site can't provide a secure connection. <hostname> sent an invalid response. ERR_SSL_PROTOCOL_ERROR
Labels: Apache Ambari
01-23-2017
11:40 AM
@Garima Verma Can you please upload the log file of this job?
01-18-2017
07:06 AM
@Aditya Mamidala
Which scheduler is configured in your cluster? Which queues are you using to run the jobs?
01-17-2017
06:19 AM
It is working now with 'PARQUET.COMPRESSION'='SNAPPY'.
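For reference, a minimal sketch of the working DDL (the table and column names here are hypothetical):
CREATE TABLE my_table_snappy (
  id INT,
  payload STRING
)
STORED AS PARQUET
TBLPROPERTIES ('PARQUET.COMPRESSION'='SNAPPY');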
01-13-2017
11:26 AM
I was creating a table in Hive using Beeline, in which I need to compress my data using the PARQUET file format.
So I tried to use set parquet.compression=SNAPPY; but while executing this command I got the error: Error: Error while processing statement: Cannot modify parquet.compression at runtime. It is not in list of params that are allowed to be modified at runtime (state=42000,code=1)
I checked, and this property is not present in the whitelist of params, and we don't have permission to edit the whitelist. So instead of using set parquet.compression=SNAPPY; at runtime, I used the table property TBLPROPERTIES ('PARQUET.COMPRESS'='SNAPPY'), and the table was successfully created. But when I loaded the data into the table and, using describe table, compared it with another table where I did not use compression, the size of the data is the same. So with 'PARQUET.COMPRESS'='SNAPPY' the compression is not happening. Is there any other property we need to set to get the compression done?
For Avro I have seen the two properties below being set to do the compression:
hive> set hive.exec.compress.output=true;
hive> set avro.output.codec=snappy;
Likewise, do I need to set some other property for Parquet files?
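As a side note, a way to compare the actual on-disk size of the two tables, assuming default warehouse paths and hypothetical table names:
$ hdfs dfs -du -h /apps/hive/warehouse/my_table_snappy
$ hdfs dfs -du -h /apps/hive/warehouse/my_table_plain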
Labels: Apache Hive
12-22-2016
06:24 AM
I have an application ID which I got from the Resource Manager, and since this job ran as the hive user, I am not able to find out exactly who ran it.
So I have the application ID, and now I need to know the code or query used by that particular application ID.
Is there a way to get any kind of logs from which I can get the code or query used by that application ID?
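One place to start, assuming log aggregation is enabled: pull the aggregated container logs for the application. For Hive jobs the submitted query is recorded in the job configuration as hive.query.string, which for many jobs also shows up in the aggregated logs or in the JobHistory UI's configuration tab (the application ID below is a placeholder):
$ yarn logs -applicationId application_1234567890123_0001 > app-logs.txt
$ grep -i 'hive.query.string' app-logs.txt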
Labels: Apache Hive, Cloudera Manager
12-19-2016
02:30 PM
@Rajesh Balamohan
The thing is, nothing has changed in the code or in the volume of data.
There has been no change on the cluster in terms of configuration or installation of new components.
And since nothing has changed, how did the job suddenly get slow?
12-15-2016
10:02 AM
@Sindhu
Please find attached the full logs from the Resource Manager UI.
rm-ui-logs.txt
12-15-2016
08:15 AM
1 Kudo
The Hive query used by my batch is taking too much time to run.
Earlier, when I fired the same query, it took around 5 minutes; now it is taking around 22 minutes.
I can't change the query.
Please suggest the correct way to investigate this issue, or kindly suggest a resolution.
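Two low-risk checks worth trying first, sketched with a hypothetical table name: compare the query plan against expectations, and refresh table statistics in case they have gone stale (stale stats can silently degrade join planning):
hive> EXPLAIN SELECT count(*) FROM my_table;
hive> ANALYZE TABLE my_table COMPUTE STATISTICS;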
Labels: Apache Hadoop, Apache Hive
12-13-2016
07:29 AM
Unfortunately we don't have Hue in our cluster.
Also, the locate command is not working in my client's environment.
I will try this script, hoping it will help.
12-13-2016
04:50 AM
Like Example.jar, are there any sample Pig scripts available with HDP?
12-12-2016
08:59 AM
All our jobs are failing and giving only one error:
16/12/12 01:55:49 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1481368045892_0922 is still in NEW
16/12/12 01:55:51 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1481368045892_0922 is still in NEW
16/12/12 01:55:53 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1481368045892_0922 is still in NEW
When I checked the Resource Manager UI and clicked on the logs, it shows: java.lang.Exception: Container is not yet running. Current state is NEW
We have recently done an HDP upgrade from 2.3.2.0 to 2.4.2.0.
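A couple of quick checks, assuming the rest of the cluster is reachable (a ResourceManager or NodeManagers left in a bad state after the upgrade is a common cause of applications stuck in NEW):
$ yarn application -list -appStates NEW   # see everything stuck in NEW
$ yarn node -list -all                    # confirm NodeManagers are RUNNING and healthy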
11-14-2016
09:50 AM
@SBandaru
I have performed the above steps, but after some time these alerts come back again and again.
So can you please suggest something to fix this issue permanently?
11-14-2016
09:47 AM
@vpoornalingam
I have checked the above two values:
The value of dfs.namenode.checkpoint.period is set to 6 hours.
Does this cause the above-mentioned alerts?
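For context, a checkpoint is triggered by whichever of two thresholds is hit first; a quick, hedged way to read both from the client configuration:
$ hdfs getconf -confKey dfs.namenode.checkpoint.period   # seconds; 21600 = 6 hours
$ hdfs getconf -confKey dfs.namenode.checkpoint.txns     # max uncheckpointed transactions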
10-19-2016
08:01 AM
After manually regenerating Kerberos keytabs, Ambari is giving the below error in Hive:
Connection failed on host <Hiveserver2 hostname>:10000 (Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/alerts/alert_hive_thrift_port.py", line 200, in execute
check_command_timeout=int(check_command_timeout))
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/hive_check.py", line 64, in check_thrift_port_sasl
Execute(kinitcmd, user=smokeuser)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/kinit -kt /dsap/etc/security/keytabs/smokeuser.headless.keytab ambari-qa@EXAMPLE.COM; ' returned 1. kinit: Password incorrect while getting initial credentials
)
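Since the failure is kinit reporting "Password incorrect", the regenerated keytab's key version likely no longer matches the principal's key in the KDC. A way to verify by hand, using the exact keytab and principal from the error above:
$ klist -kt /dsap/etc/security/keytabs/smokeuser.headless.keytab   # list principals and key versions in the keytab
$ kinit -kt /dsap/etc/security/keytabs/smokeuser.headless.keytab ambari-qa@EXAMPLE.COM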
Labels: Apache Hive
10-18-2016
02:19 PM
I did, but it is not working.
10-17-2016
08:15 AM
@Jonas Straub Could you please tell me how and where to check whether all the Kerberos configurations have been removed?
Also, we are getting a different kind of logs now.
PFA the same: log.txt