Member since: 03-01-2016
Posts: 97
Kudos Received: 21
Solutions: 0
07-02-2018
06:11 AM
user-sync-issue.png
06-25-2018
10:28 AM
I have integrated Hue with LDAP, but when an LDAP user tries to log in, I always get this message: "Cannot make home directory for user Kishore.Sanchian." How can I resolve this issue?
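A common cause is that the LDAP user has no HDFS home directory and Hue cannot create one. A minimal sketch of pre-creating it by hand, assuming the HDFS superuser and that the directory name should match the username from the error (the group name is also an assumption):

# Sketch only: create the missing HDFS home directory for the LDAP user.
sudo -u hdfs hdfs dfs -mkdir -p /user/Kishore.Sanchian
sudo -u hdfs hdfs dfs -chown Kishore.Sanchian:Kishore.Sanchian /user/Kishore.Sanchian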
05-21-2018
08:05 AM
Hi Team, what is the use of the minimum and maximum container size? Suppose I set my minimum container size to 2 GB and my maximum container size to 8 GB. In our cluster we run different types of jobs: one job may consume 5 GB of memory and another 8 GB, and both run successfully because my max container size is 8 GB. If I run jobs that request 10 GB, 15 GB, or 20 GB of memory, will they run successfully in my cluster with the above min and max container sizes? Does a single job use a single container or multiple containers? Can anyone help me with this?
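For context, these limits apply per container, not per job: a single container request above the maximum allocation is rejected, while an application as a whole can hold many containers. A sketch for inspecting the relevant settings, assuming the standard yarn-site.xml property names and a default HDP config path:

# The minimum rounds each container request up; the maximum is a hard
# per-container ceiling. A job may still use many containers in total.
grep -B1 -A2 'yarn.scheduler.minimum-allocation-mb' /etc/hadoop/conf/yarn-site.xml
grep -B1 -A2 'yarn.scheduler.maximum-allocation-mb' /etc/hadoop/conf/yarn-site.xml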
05-21-2018
05:26 AM
Thank you so much for the help
05-18-2018
08:27 AM
How can I check each Kafka topic's retention period? The command below displays only the topic details.

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper dblnetworl.sap.com:2181 --describe --topic network-files

Output:

Topic: network-files  Partition: 0  Leader: 1005  Replicas: 1005,1006,1001  Isr: 1005,1006,1001
Topic: network-files  Partition: 1  Leader: 1006  Replicas: 1006,1001,1002  Isr: 1006,1002,1001
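One way to see retention, a sketch assuming the standard kafka-configs.sh tool shipped alongside kafka-topics.sh: it lists per-topic overrides such as retention.ms; a topic with no override falls back to the broker-level log.retention.* settings.

# Show per-topic config overrides (e.g. retention.ms) for one topic.
/usr/hdp/current/kafka-broker/bin/kafka-configs.sh --zookeeper dblnetworl.sap.com:2181 --entity-type topics --entity-name network-files --describe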
05-11-2018
11:56 AM
If I change the Kerberos principal/gateway user, what steps need to be taken care of?
02-23-2018
06:24 AM
We have recently Kerberized our PROD Kafka cluster, and because of this we are not able to post data to Kafka from Node.js.
Is there a document or steps available for integrating Node.js with a Kerberized Kafka cluster?
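A first diagnostic, sketched with an assumed keytab path, principal, broker host, and topic name: confirm that a ticket can be obtained and that the Kerberized console producer can post, which isolates whether the failure is on the cluster side or in the Node.js client.

# Obtain a ticket, then try the HDP console producer over SASL.
kinit -kt /etc/security/keytabs/kafka_client.keytab kafkauser@EXAMPLE.COM
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic test --security-protocol PLAINTEXTSASL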
02-22-2018
12:37 PM
solr-ui-error.png The Solr UI is not working. Kindly find the attached screenshot.
02-16-2018
10:36 AM
What are the steps to write data to a Kafka topic using NiFi? The Kafka cluster is Kerberized. Please find the attached error screenshot.
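With a Kerberized cluster, NiFi's Kafka publisher generally needs a client JAAS configuration in addition to the processor's Kerberos settings. A sketch with assumed file paths, keytab, and principal:

# Hypothetical client JAAS file for NiFi's Kafka processors.
cat > /etc/nifi/kafka_client_jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/nifi.service.keytab"
  principal="nifi@EXAMPLE.COM";
};
EOF
# NiFi is then pointed at this file, e.g. via a java.arg entry in
# bootstrap.conf: -Djava.security.auth.login.config=/etc/nifi/kafka_client_jaas.conf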
02-14-2018
02:11 PM
I'm running Spark jobs on server-1. If I run LLAP daemons on the same server, is there any impact on the running Spark jobs?
01-30-2018
04:38 PM
I know how to add services, but how do I add multiple Zeppelin services to an HDP cluster, and how do I rebalance the cluster workloads?
01-30-2018
09:37 AM
02-13-2017
12:53 PM
@Jay SenSharma Thanks, Jay. It is working fine now and the issue is resolved.
02-13-2017
07:34 AM
@hduraiswamy I downloaded HDF from the Hortonworks portal and installed it successfully. For NiFi and LDAP integration, which config files do I need to modify/change?
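For reference, the two files usually involved, sketched with an assumed HDF install path; the exact provider values come from your LDAP environment:

# nifi.properties: point the login identity provider at LDAP, e.g.
#   nifi.security.user.login.identity.provider=ldap-provider
# login-identity-providers.xml: fill in the ldap-provider block
#   (LDAP URL, manager DN/password, user search base and filter).
vi /usr/hdf/current/nifi/conf/nifi.properties
vi /usr/hdf/current/nifi/conf/login-identity-providers.xml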
02-13-2017
07:22 AM
I followed the steps below to resolve the issue, but I'm getting an error. Kindly provide a solution.

Steps:

1) Find the alert REST URL:

curl -u admin:$AMBARI_PASSWORD -H "X-Requested-by:ambari" -k -X GET http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions?AlertDefinition/name=yarn_app_timeline_server_webui

You should get something like:

{
  "href" : "http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions?AlertDefinition/name=yarn_app_timeline_server_webui",
  "items" : [
    {
      "href" : "http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions/33",
      "AlertDefinition" : {
        "cluster_name" : "$CLUSTER_NAME",
        "id" : 33,
        "label" : "App Timeline Web UI",
        "name" : "yarn_app_timeline_server_webui"
      }
    }
  ]
}
2) Get the actual alert definition

Take the inner "href" value from the response above and run a second GET command to get the actual definition of the alert (note that the "id", which is 33 in the example, could be different in your case):

curl -u admin:$AMBARI_PASSWORD -H "X-Requested-by:ambari" -k -X GET http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions/33 > app_def.json

The content of the output file app_def.json should look similar to:

{
  "href" : "http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions/33",
  "AlertDefinition" : {
    "cluster_name" : "$CLUSTER_NAME",
    "component_name" : "APP_TIMELINE_SERVER",
    "description" : "This host-level alert is triggered if the App Timeline Server Web UI is unreachable.",
    "enabled" : true,
    "id" : 33,
    "ignore_host" : false,
    "interval" : 1,
    "label" : "App Timeline Web UI",
    "name" : "yarn_app_timeline_server_webui",
    "scope" : "ANY",
    "service_name" : "YARN",
    "source" : {
      "reporting" : {
        "critical" : {
          "text" : "Connection failed to {1} ({3})"
        },
        "ok" : {
          "text" : "HTTP {0} response in {2:.3f}s"
        },
        "warning" : {
          "text" : "HTTP {0} response from {1} in {2:.3f}s ({3})"
        }
      },
      "type" : "WEB",
      "uri" : {
        "kerberos_principal" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.principal}}",
        "connection_timeout" : 5.0,
        "kerberos_keytab" : "{{yarn-site/yarn.timeline-service.http-authentication.kerberos.keytab}}",
        "http" : "{{yarn-site/yarn.timeline-service.webapp.address}}",
        "https_property" : "{{yarn-site/yarn.http.policy}}",
        "https_property_value" : "HTTPS_ONLY",
        "default_port" : 0.0,
        "https" : "{{yarn-site/yarn.timeline-service.webapp.https.address}}"
      }
    }
  }
}
3) Change the timeout value in the JSON file

Edit the file app_def.json and, to increase the timeout of the alert, change the value (in seconds) from:

"connection_timeout" : 5.0,

to, for example, 12.0:

"connection_timeout" : 12.0,

There is a second important change to make to the JSON file to work around an issue with Ambari: also change

"default_port" : 0.0,

to

"default_port" : 0,

4) Send the new definition back to Ambari

Issue the command (as usual, check that the URL is the one found in step 1):

curl -u admin:$AMBARI_PASSWORD -H "X-Requested-by:ambari" -k -X PUT -d @app_def.json http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions/33

ERROR:

{
  "status" : 400,
  "message" : "org.apache.ambari.server.controller.spi.UnsupportedPropertyException: The properties [href] specified in the request or predicate are not supported for the resource type AlertDefinition."
}
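The 400 message itself points at the problem: the PUT body still contains the top-level "href" field, which the AlertDefinition resource does not accept. A sketch of stripping it and resubmitting; the sed pattern is an assumption and simply deletes the lines containing "href":

# Remove the unsupported top-level "href" field, then retry the PUT.
sed -i '/"href" :/d' app_def.json
curl -u admin:$AMBARI_PASSWORD -H "X-Requested-by:ambari" -k -X PUT -d @app_def.json http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/alert_definitions/33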
02-11-2017
08:43 PM
Hi @pierre Bullard, I tried to integrate LDAP and NiFi but I'm facing an issue, which is why I posted. Thanks for your update.
02-11-2017
12:48 PM
How can I find long-running Hadoop/YARN jobs using the command line?
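A starting point with the standard YARN CLI, a sketch only: list the applications still running (the output includes start times), then inspect any suspiciously old ones.

# List applications in the RUNNING state.
yarn application -list -appStates RUNNING
# Inspect one in detail (the application ID below is a placeholder).
yarn application -status application_1234567890123_0001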
02-11-2017
12:39 PM
Kindly provide me the steps for NiFi and LDAP integration.
01-11-2017
10:28 AM
log-file.txt Hi Vijayan, please find the attached log.
01-10-2017
10:45 AM
The Ambari console is slow and metrics are not being sent to the Ambari server properly. Please explain how to resolve this issue. Error:

2017-01-10 11:06:34,736 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:06:34,737 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:08:36,799 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:08:36,799 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:09:32,849 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:09:32,849 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:10:27,895 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:10:27,896 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:12:23,949 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:12:23,949 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:13:18,966 [WARNING] emitter.py:74 - Error sending metrics to server. <urlopen error timed out>
2017-01-10 11:13:18,966 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:14:14,026 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:14:14,026 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:16:09,081 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:16:09,081 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:17:04,132 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:17:04,132 [WARNING] emitter.py:80 - Retrying after 5 ...
2017-01-10 11:17:59,183 [WARNING] emitter.py:74 - Error sending metrics to server. timed out
2017-01-10 11:17:59,184 [WARNING] emitter.py:80 - Retrying after 5 ...
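These timeouts come from the Ambari Metrics monitor failing to reach the Metrics Collector. A first reachability check, sketched with an assumed collector hostname and the default collector port 6188:

# Verify the Metrics Collector endpoint answers from the affected host;
# a hang or timeout here matches the emitter warnings above.
curl -s -o /dev/null -w '%{http_code}\n' http://metrics-collector.example.com:6188/ws/v1/timeline/metrics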
09-30-2016
05:21 PM
When I post simultaneous jobserver requests, they always seem to be processed in FIFO mode. This is despite my best efforts to enable the FAIR scheduler. How can I ensure that my requests are always processed in parallel?

Background: On my cluster there is one SparkContext to which users can post requests to process data. Each request may act on a different chunk of data, but the operations are always the same. A small one-minute job should not have to wait for a large one-hour job to finish.

Intuitively I would expect the following to happen (see my configuration below): the context runs within a FAIR pool. Every time a user sends a request to process some data, Spark should split up the fair pool and give a fraction of the cluster resources to process that new request. Each request is then run in FIFO mode parallel to any other concurrent requests.

Here's what actually happens when I run simultaneous jobs: the interface says "1 Fair Scheduler Pools" and it lists one active (FIFO) pool named "default." It seems that everything is executing within the same FIFO pool, which itself is running alone within the FAIR pool. I can see that my fair pool details are loaded correctly on Spark's Environment page, but my requests are all processed in FIFO fashion.

How do I configure my environment/application so that every request actually runs in parallel to others? Do I need to create a separate context for each request? Do I create an arbitrary number of identical FIFO pools within my FAIR pool and then somehow pick an empty pool every time a request is made? Considering the objectives of Jobserver, it seems like this should all be automatic and not very complicated to set up.
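For reference, in standard Spark enabling FAIR mode alone is not enough: each submitted job must also be assigned to a pool by setting the spark.scheduler.pool local property on the thread that submits it, otherwise everything lands in the single "default" pool, which matches the behavior described above. A sketch of the submission-side half, with an assumed pool file path and a placeholder application name:

# Enable FAIR scheduling and point at a pool definition file;
# your_app.py is a placeholder for the actual application.
spark-submit --conf spark.scheduler.mode=FAIR --conf spark.scheduler.allocation.file=/etc/spark/conf/fairscheduler.xml your_app.py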
09-29-2016
07:25 AM
The scheduler is already set to FAIR, but I see that all apps submitted through Spark in the Spark History Server are using FIFO instead. What is causing FIFO to be used, or at least displayed, when checking the submitted apps?
09-22-2016
05:44 PM
1 Kudo
The agent is unable to transmit messages between 2 hosts. Please explain the detailed steps for configuring Flume between two different hosts.
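The usual two-host pattern is an Avro sink on the sending agent pointing at an Avro source on the receiving agent. A minimal sketch of the sender side; the agent name, source command, hostname, and port are all assumptions:

# Hypothetical sender-side Flume agent: tail a log file and forward
# events to an Avro source listening on the receiving host.
cat > /etc/flume/conf/sender.properties <<'EOF'
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/messages
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = receiver.example.com
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1
EOF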
09-14-2016
05:48 PM
1 Kudo
HTTP ERROR 403 Problem accessing /. Reason: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
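This 403 typically means the HTTP client presented no SPNEGO ticket to the Kerberized web endpoint. A sketch of checking from the shell, with an assumed principal, keytab, and URL:

# Obtain a ticket, then use SPNEGO (--negotiate) against the secured UI.
kinit -kt /etc/security/keytabs/smokeuser.headless.keytab ambari-qa@EXAMPLE.COM
curl --negotiate -u : http://namenode.example.com:50070/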
09-12-2016
03:16 PM
img.png We are getting the below error/exception while executing a Sqoop action via Oozie.

Error: Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exception invoking main(), java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SqoopMain not found

It seems some jars related to the Sqoop action are not available on the Oozie share lib path, i.e. "/user/oozie/share/lib/lib_20160807001458/sqoop". We are getting access issues while trying to update the Oozie share lib path with the required jars.
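A way to confirm and refresh what Oozie is actually serving from its share lib, a sketch assuming the standard oozie CLI and a placeholder Oozie server URL (updates to the HDFS share lib path normally have to be done as the oozie user, which may explain the access issues):

# List the jars Oozie sees in its sqoop sharelib, then ask the server
# to rescan the share lib after any jars are added.
oozie admin -oozie http://oozie-server.example.com:11000/oozie -shareliblist sqoop
oozie admin -oozie http://oozie-server.example.com:11000/oozie -sharelibupdate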