Member since: 06-27-2017
Posts: 30
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2765 | 11-09-2017 03:01 PM |
01-30-2018
11:03 AM
Thanks @Karthik Palanisamy for sharing this article. I have done the same activity without initializing shared edits or stopping the entire HDFS service. Below are the steps I followed; a quick sanity check is sketched after the list. Can you please elaborate on why we need to re-initialize the shared edits, and correct me if the steps I followed are not appropriate?
1. Change the setting in Ambari for the JournalNode edits directory (dfs.journalnode.edits.dir) from /hadoop/hdfs/journal/ to /data/1/journal/.
2. Don't restart any services immediately.
3. Stop the journal node on NODE1.
a. SSH to NODE1
b. sudo mkdir -p /data/1/journal
c. sudo chown hdfs:hadoop /data/1/journal
d. sudo rsync -ahvAX /hadoop/hdfs/journal/* /data/1/journal/
e. Start the journal node on NODE1.
4. Repeat step 3 for remaining two journal nodes NODE2 and NODE3.
5. Restart the required services accordingly (Rolling or All at once)
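A quick sanity check before restarting anything (a sketch; paths taken from the steps above):
sudo diff -r /hadoop/hdfs/journal/ /data/1/journal/ && echo "copies match"
# after the JournalNode restart, confirm fresh edits land in the new directory
ls -lt /data/1/journal/*/current/ | head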
11-09-2017
03:02 PM
Thanks @bkosaraju for the response and details.
11-09-2017
03:01 PM
Apologies @nkumar for the delay in responding. The issue is related to Ambari, which behaves differently after disabling and re-enabling Kerberos. The issue was fixed after making changes to Ambari, with the help of Hortonworks Support, using the REST calls below.
curl -u test:test -H "X-Requested-By: ambari" -X POST http://ambari-server:8080/api/v1/clusters/MyClusterName/services/KERBEROS
curl -u test:test -H "X-Requested-By: ambari" -X POST http://ambari-server:8080/api/v1/clusters/MyClusterName/services/KERBEROS/components/KERBEROS_CLIENT
curl -s -u test:test http://ambari-server:8080/api/v1/hosts | grep host_name | sed -n 's/.*"host_name" : "\([^\"]*\)".*/\1/p' > hostcluster.txt
for i in `cat hostcluster.txt`; do curl -u test:test -H "X-Requested-By: ambari" -X POST http://ambari-server:8080/api/v1/clusters/MyClusterName/hosts/$i/host_components/KERBEROS_CLIENT; done
curl -u test:test -H 'X-Requested-By: ambari' -X PUT -d '{"HostRoles": {"state":"INSTALLED"}}' http://ambari-server:8080/api/v1/clusters/MyClusterName/host_components?HostRoles/state=INIT
curl -H "X-Requested-By:ambari" -u test:test -i -X PUT -d @./payload.json http://ambari-server:8080/api/v1/clusters/MyClusterName
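A GET on the same endpoint (same placeholder credentials and host as above) can confirm the KERBEROS service was re-created:
curl -u test:test http://ambari-server:8080/api/v1/clusters/MyClusterName/services/KERBEROS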
11-09-2017
02:55 PM
Update! We built our own jar for search_replace from the Apache Flume 1.6 code with minor changes. Now it's working as expected.
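For anyone doing the same, a sketch of deploying a custom interceptor jar via Flume's plugins.d convention (the HDP path and jar name below are placeholders; adjust to your layout):
mkdir -p /usr/hdp/current/flume-server/plugins.d/search-replace/lib
cp my-search-replace-interceptor.jar /usr/hdp/current/flume-server/plugins.d/search-replace/lib/
# restart the agent so the plugins.d directory is scanned on startup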
10-19-2017
08:17 AM
Hi @dthakkar, I'm using the piece of config below, which works flawlessly in Apache Flume 1.7, but HDP Flume throws an error saying class not found for search_replace. Even the HDP Flume documentation doesn't mention search_replace: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/ds_flume/FlumeUserGuide.html
mytest.sources.src.interceptors = s0 s1 s2 s3
mytest.sources.src.interceptors.s0.type = host
mytest.sources.src.interceptors.s0.useIP = false
mytest.sources.src.interceptors.s1.type = search_replace
mytest.sources.src.interceptors.s1.searchPattern = ^\\{
mytest.sources.src.interceptors.s1.replaceString = \\"
mytest.sources.src.interceptors.s2.type = search_replace
mytest.sources.src.interceptors.s2.searchPattern = \\}
mytest.sources.src.interceptors.s2.replaceString =
mytest.sources.src.interceptors.s3.type = search_replace
mytest.sources.src.interceptors.s3.searchPattern = \\,\\{
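As a sanity check (HDP install path assumed), you can verify whether the interceptor class actually ships in the installed Flume jars:
for j in /usr/hdp/current/flume-server/lib/flume-ng-core*.jar; do echo "$j"; unzip -l "$j" | grep -i SearchAndReplace; done
If nothing matches, the class (org.apache.flume.interceptor.SearchAndReplaceInterceptor in Apache Flume 1.6+) simply isn't in this build.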
10-18-2017
02:00 PM
Hi All, it looks like the search_replace interceptor is missing in HDP 2.6. Can anyone advise an alternative to the search_replace interceptor in Flume?
10-06-2017
11:49 AM
Hi @Geoffrey Shelton Okot, I have checked the content and everything looks good; we are using the same krb5 config files across different clusters. I don't see any discrepancies with Kerberos.
10-03-2017
12:42 PM
Thanks @Geoffrey Shelton Okot for the quick response. Ambari is running as a user with sudo privileges, and auto-start is enabled, but only for metrics-collector. In the KDC I can see that it has created the corresponding principals associated with the services and hostnames. The only issue I have observed is that it stopped creating keytab files and distributing them to the designated hosts, even though Ambari reported success. I have carried out this activity some 9-10 times, and every time it ends without creating the keytab files.
10-03-2017
12:33 PM
Yes, I have regenerated the keytabs, but it didn't help. From what I have seen, it didn't create new keytabs.
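For reference, the API equivalent of a full keytab regeneration (placeholder credentials and cluster name, per the Ambari Kerberos API):
curl -H "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"Clusters": {"security_type": "KERBEROS"}}' "http://ambari-server:8080/api/v1/clusters/MyClusterName?regenerate_keytabs=all"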
10-03-2017
10:08 AM
Hi All, I'm facing an issue while installing a new component to an already kerberized cluster. The installation completes without any issues, but the service does not start because the keytab file is unavailable on the host where the new component is installed. After the installation I validated that the new keytab files are not created in the designated location, but Ambari says it has created the keytabs and distributed them to that host. Ambari: 2.5.1 HDP: 2.6.1
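For reference, this is the kind of check I mean (standard HDP keytab location assumed; <component> is a placeholder):
ls -l /etc/security/keytabs/
klist -kt /etc/security/keytabs/<component>.service.keytab   # lists the keytab's principals if the file exists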
Labels:
- Apache Ambari
09-28-2017
08:37 AM
Thanks @bkosaraju for the quick response. Somehow it didn't help me. Here is what I have done:
1. Removed the LDAP user using the command below:
curl --insecure -u admin:$PASSWORD -H 'X-Requested-By: ambari' -X DELETE http://$AMBARI_HOST:8080/api/v1/users/myUser
2. Updated /etc/ambari-server/conf/ambari.properties with the parameter below:
authentication.ldap.username.forceLowercase=false
3. Restarted the Ambari server.
4. Synced a single LDAP user using the command below:
ambari-server sync-ldap --users users
Still I'm getting the same results: it's creating a lowercase account and lowercase HDFS user directories. Am I missing anything? HDP: 2.6.1.0 Ambari: 2.5.1.0
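After the sync, listing Ambari's users shows the stored case directly (same placeholders as above):
curl -u admin:$PASSWORD http://$AMBARI_HOST:8080/api/v1/users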
09-27-2017
05:03 PM
Hi All, I have set up LDAP/AD and Ambari server integration, which is working fine; the only concern I have is that Ambari is converting case-sensitive AD usernames to lowercase. Since this is not suitable for us, I would like to revert the changes made by running ambari-server setup-ldap, so that LDAP users can no longer log in to Ambari. Has anyone faced this and been able to resolve it? Thanks.
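From what I can tell, reverting means editing ambari.properties by hand. A sketch of what I have in mind, assuming client.security=ldap is the property setup-ldap writes (back up the file first):
cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.bak
sed -i 's/^client.security=ldap/#client.security=ldap/' /etc/ambari-server/conf/ambari.properties   # disable LDAP authentication
ambari-server restart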
Labels:
- Apache Ambari
09-26-2017
11:12 AM
Hi All, when I sync LDAP/AD with Ambari, users are created in lowercase in Ambari, whereas our users are case-sensitive. Since the users are created in lowercase, the HDFS home directories are also created in lowercase, but users connect to the gateway boxes using the case-sensitive user IDs. Has anyone faced this issue and resolved it? Please share any inputs on how to overcome it. Thanks,
Labels:
- Apache Ambari
09-25-2017
09:11 AM
Thanks @Xiaoyu Yao for the details and documentation link.
09-19-2017
11:54 AM
Hi All, is there a way to make only the AD username the owner of HDFS files/directories, instead of userName@domainName?
Current scenario:
hdfs dfs -ls /testfile
-rw-r--r-- 3 dgiri@mytestdomain.com hdfs 0 2017-06-30 10:06 /testfile
I want it to be like this:
hdfs dfs -ls /testfile
-rw-r--r-- 3 dgiri hdfs 0 2017-06-30 10:06 /testfile
Please note that the cluster is AD integrated and everything is working as expected, except the file/directory ownership. Also, I'm getting the INFO message below whenever I run hdfs commands. Is there a way to stop it from being displayed?
17/09/19 12:50:13 INFO util.KerberosName: No auth_to_local rules applied to dgiri@mytestdomain.com
Any help is much appreciated. Thanks.
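For reference, the usual fix is an auth_to_local rule in core-site.xml (hadoop.security.auth_to_local). A minimal sketch, with the realm spelling assumed from the INFO message above (the pattern must match the principal's realm exactly):
RULE:[1:$1@$0](.*@mytestdomain\.com)s/@.*//
DEFAULT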
Labels:
- Apache Hadoop
09-15-2017
01:17 PM
Thanks @Jay SenSharma for the quick response. As I mentioned earlier, it goes down only when the Flume agents are running; otherwise it works fine. Ambari Metrics version: 0.1.0. The /var/log/ambari-metrics-collector/ambari-metrics-collector.out file has only this log:
Sep 15, 2017 11:59:51 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider as a provider class
Sep 15, 2017 11:59:51 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices as a root resource class
Sep 15, 2017 11:59:51 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TimelineWebServices as a root resource class
Sep 15, 2017 11:59:51 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Sep 15, 2017 11:59:51 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.11 12/09/2011 10:27 AM'
Sep 15, 2017 11:59:52 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Sep 15, 2017 11:59:52 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
Sep 15, 2017 11:59:52 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices to GuiceManagedComponentProvider with the scope "Singleton"
Sep 15, 2017 11:59:52 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TimelineWebServices to GuiceManagedComponentProvider with the scope "Singleton"
java.lang.NumberFormatException
at java.math.BigDecimal.<init>(BigDecimal.java:494)
at java.math.BigDecimal.<init>(BigDecimal.java:383)
at java.math.BigDecimal.<init>(BigDecimal.java:806)
at java.math.BigDecimal.valueOf(BigDecimal.java:1274)
at org.apache.phoenix.schema.types.PDouble.getMaxLength(PDouble.java:70)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:210)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:173)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
at org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:1016)
at org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:1000)
at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:856)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:593)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:581)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:336)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:199)
at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.commitMetrics(PhoenixHBaseAccessor.java:310)
at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.PhoenixHBaseAccessor.commitMetricsFromCache(PhoenixHBaseAccessor.java:257)
at org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.MetricsCacheCommitterThread.run(MetricsCacheCommitterThread.java:35)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
09-15-2017
09:56 AM
Hi All, I have the scenario below with the Ambari Metrics Collector on a 27-node HDP 2.6.1 cluster.
1. Flume agents (7) are stopped -- the Ambari Metrics Collector works without any issues and displays metrics in the Ambari web UI.
2. Flume agents (7) are running -- the Ambari Metrics Collector stops on its own without giving any error in the logs.
I'm not sure how running Flume agents stops the Ambari Metrics Collector, but I can see the exception below in /var/log/ambari-metrics-collector/ambari-metrics-collector.out:
INFO: Binding org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TimelineWebServices to GuiceManagedComponentProvider with the scope "Singleton"
java.lang.NumberFormatException
at java.math.BigDecimal.<init>(BigDecimal.java:494)
at java.math.BigDecimal.<init>(BigDecimal.java:383)
at java.math.BigDecimal.<init>(BigDecimal.java:806)
at java.math.BigDecimal.valueOf(BigDecimal.java:1274)
at org.apache.phoenix.schema.types.PDouble.getMaxLength(PDouble.java:70)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:210)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:173)
at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
Also some logs from /var/log/ambari-metrics-collector/gc.log-201709151029:
2017-09-15T10:31:56.937+0100: 138.481: [CMS-concurrent-reset-start]
2017-09-15T10:31:56.941+0100: 138.486: [CMS-concurrent-reset: 0.005/0.005 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
2017-09-15T10:34:03.552+0100: 265.097: [GC (Allocation Failure) 2017-09-15T10:34:03.552+0100: 265.097: [ParNew: 235968K->26176K(235968K), 0.0243885 secs] 279609K->82718K(2070976K), 0.0245977 secs] [Times: user=0.10 sys=0.01, real=0.03 secs]
2017-09-15T10:37:12.462+0100: 454.006: [GC (Allocation Failure) 2017-09-15T10:37:12.462+0100: 454.007: [ParNew: 235968K->26176K(235968K), 0.0177335 secs] 292510K->89076K(2070976K), 0.0179993 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
2017-09-15T10:41:52.432+0100: 733.977: [GC (Allocation Failure) 2017-09-15T10:41:52.432+0100: 733.977: [ParNew: 235968K->20890K(235968K), 0.0307675 secs] 298868K->102991K(2070976K), 0.0309685 secs] [Times: user=0.11 sys=0.01, real=0.03 secs]
2017-09-15T10:46:52.433+0100: 1033.977: [GC (Allocation Failure) 2017-09-15T10:46:52.433+0100: 1033.977: [ParNew: 230682K->22481K(235968K), 0.0078365 secs] 312783K->104582K(2070976K), 0.0080006 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
2017-09-15T10:51:52.447+0100: 1333.992: [GC (Allocation Failure) 2017-09-15T10:51:52.447+0100: 1333.992: [ParNew: 232273K->26176K(235968K), 0.0093279 secs] 314374K->109959K(2070976K), 0.0094784 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
I have also increased the heap space per the HDP recommendations. Can anyone please help me understand this scenario? Thanks.
Labels:
- Apache Ambari
09-04-2017
04:53 PM
1 Kudo
@dthakkar, the configuration below made the difference. Thanks a lot.
agent1.channels.kafkaChannel.parseAsFlumeEvent = false
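For anyone landing here later: with parseAsFlumeEvent = false the Kafka channel reads and writes plain message bytes instead of Avro-serialized Flume events, which is what you want when non-Flume producers or consumers share the topic. A minimal sketch of the channel in context (broker, ZooKeeper, and topic names are placeholders; property names per the Flume 1.6-era KafkaChannel):
agent1.channels.kafkaChannel.type = org.apache.flume.channel.kafka.KafkaChannel
agent1.channels.kafkaChannel.brokerList = brokerServer:portNo
agent1.channels.kafkaChannel.zookeeperConnect = ZookeeperServer:portNo
agent1.channels.kafkaChannel.topic = channelTopic
agent1.channels.kafkaChannel.parseAsFlumeEvent = false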
09-04-2017
04:25 PM
Thanks @dthakkar for your valuable response. I will give it a try.
08-24-2017
05:29 PM
Update 1: When using Kafka only as a sink, with other types of source and channel, I'm able to push messages to the Kafka topic. It fails when KafkaSink is used together with KafkaSource or KafkaChannel. The following combinations have been tested:
1. kafkaSource, kafkaChannel, other sinks -- works
2. kafkaSource, kafkaChannel, kafkaSink -- fails
3. anySource (except Kafka), anyChannel (except Kafka), kafkaSink -- works
4. kafkaSource, memoryChannel, kafkaSink -- fails
5. anySource, kafkaChannel, kafkaSink -- fails
Has anyone tried using both kafkaSource and kafkaSink in the same Flume agent? Thanks.
08-24-2017
09:39 AM
Hi All,
I'm facing issues with a Flume agent when I configure the sink as KafkaSink. The KafkaSink does not write any messages to the Kafka topic, and I don't see any errors in the Flume agent logs.
HDP - 2.6.1. The Flume source and channels are working as expected with a File Roll sink. Below is the Flume sink configuration.
testagent2.sinks.kafkasink2.channel = memchannel2
testagent2.sinks.kafkasink2.type = org.apache.flume.sink.kafka.KafkaSink
testagent2.sinks.kafkasink2.brokerList = brokerServer:portNo
testagent2.sinks.kafkasink2.zookeeperConnect = ZookeeperServer:portNo
testagent2.sinks.kafkasink2.topic = testtopic2
testagent2.sinks.kafkasink2.batchSize = 100
testagent2.sinks.kafkasink2.requiredAcks = 1
testagent2.sinks.kafkasink2.kafka.rebalance.max.retries = 40
testagent2.sinks.kafkasink2.kafka.rebalance.backoff.ms = 5000
testagent2.sinks.kafkasink2.kafka.zookeeper.session.timeout.ms = 6000
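A quick way to check whether anything is actually reaching the topic (the CLI path is the standard HDP layout; the --zookeeper flag matches the 0.10-era Kafka shipped with HDP 2.6, so adjust for your version):
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper ZookeeperServer:portNo --topic testtopic2 --from-beginning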
07-18-2017
12:46 PM
Hi All, I'm getting the same issue after kerberizing the cluster, though it's a newly built one. HDP: 2.6.1.0, Ambari: 2.5.1.0
07-04-2017
09:53 AM
The KB article below helped me resolve the issue. https://community.hortonworks.com/content/supportkb/49037/phoenix-sqlline-query-on-larger-data-set-fails-wit.html
06-30-2017
05:28 PM
Thanks for the info, Sandeep.
06-30-2017
04:31 PM
This issue was resolved for me after installing the Tez client, Hive client, and HCat on the Hive Metastore servers.
06-29-2017
03:05 PM
Thanks Jay SenSharma for sharing the link.
06-27-2017
12:08 PM
I'm also getting the same error in the Hive Metastore alerts. Has anyone faced this issue and fixed it?