Member since: 02-08-2016
Posts: 33
Kudos Received: 19
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 761 | 07-12-2016 08:35 PM |
| | 613 | 06-27-2016 02:35 PM |
| | 1097 | 06-01-2016 08:06 PM |
11-14-2017
09:44 PM
Below are the custom properties that go hand in hand with H2O Sparkling Water. Use these properties to modify the H2O cluster's nodes, memory, cores, etc.
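The property list itself is truncated above. As a rough, hedged sketch: in Sparkling Water's internal backend each H2O node runs inside a Spark executor, so the standard Spark executor properties are what size the H2O cluster (the values below are illustrative, not the original post's list):

```
# Illustrative spark-defaults.conf entries; values are examples only
spark.executor.instances  4     # number of executors, i.e. H2O cluster nodes
spark.executor.memory     8g    # memory per H2O node
spark.executor.cores      4     # cores per H2O node
```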
09-25-2017
05:31 PM
Short Description: Configure Knox to access the Atlas UI

Here are the steps to access the Atlas UI through Knox.

1. Make sure Knox is configured properly and works fine.
2. ssh to the Knox gateway host and go to /var/lib/knox/data-2.6.****/services
3. Create the service-definition directories: mkdir -p atlas/0.8.0/ and mkdir -p atlas-api/0.8.0/
4. Download the configurations from https://github.com/apache/knox/tree/v0.13.0/gateway-service-definitions/src/main/resources/services/atlas-api/0.8.0 to /var/lib/knox/data-2.6.***/services/atlas-api/0.8.0/
5. Download the configurations from https://github.com/apache/knox/tree/v0.13.0/gateway-service-definitions/src/main/resources/services/atlas/0.8.0 to /var/lib/knox/data-2.6.***/services/atlas/0.8.0/
6. Change the owner/group to knox for /var/lib/knox/data-2.6.**/services/atlas*/ and its subdirectories.
7. Go to the Knox configurations and modify "Advanced topology" with the service tag below:

<service>
    <role>ATLAS</role>
    <url>sandbox.hortonworks.com:21000</url>
</service>

8. Restart the Knox service.
9. You should be able to access the Atlas UI from the URL below:

https://sandbox.hortonworks.com:8443/gateway/default/atlas/

Please note: at this point in time this is a workaround; Hortonworks doesn't support it yet.
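For convenience, steps 2-6 as a single shell sketch (data-2.6.x stands in for the masked versioned directory, and the knox user/group is an assumption; adjust to your install):

```bash
# Sketch only: substitute the real versioned data directory for data-2.6.x
cd /var/lib/knox/data-2.6.x/services
mkdir -p atlas/0.8.0 atlas-api/0.8.0
# place the downloaded service definitions (service.xml, rewrite.xml) from the
# Knox v0.13.0 source tree into the matching directories, then fix ownership:
chown -R knox:knox atlas atlas-api
```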
Tags:
- Atlas
- FAQ
- Governance & Lifecycle
- Knox
09-11-2017
10:15 PM
The cluster is kerberized with Knox installed. Accessing Hive using beeline without Knox works fine, but with the Knox URL it gives the error below.

INFO hadoop.gateway (KnoxLdapRealm.java:getUserDn(724)) - Computed userDn: CN=xxxxx,OU=xxxx,DC=xxx,DC=xxxx,DC=xx using ldapSearch for principal: user1
INFO hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(203)) - Could not login: org.apache.shiro.authc.UsernamePasswordToken - user1, rememberMe=false (1x.xx.xx.xxx)
ERROR hadoop.gateway (KnoxLdapRealm.java:doGetAuthenticationInfo(205)) - Shiro unable to login: javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C09042F, comment: AcceptSecurityContext error, data 775, v2580^@]
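For context, the Knox connection being attempted is of the form below; the host, truststore path, and topology name are placeholders, not values from the post:

```bash
# Hypothetical beeline invocation through Knox (default topology assumed)
beeline -u "jdbc:hive2://knox-host:8443/;ssl=true;sslTrustStore=/path/to/gateway.jks;trustStorePassword=***;transportMode=http;httpPath=gateway/default/hive" -n user1 -p '***'
```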
Labels:
- Apache Hive
- Apache Knox
05-30-2017
08:48 PM
1 Kudo
Issue: When adding the Oozie service to a cluster, it's possible the user could see an error with the simple message "Failed: creating new Oozie WAR" as below, and on top of it the Oozie server won't start.
INFO: Adding extension: /usr/hdp/current/oozie-server/libext/mysql-jdbc-driver.jar
Failed: creating new Oozie WAR

Solution: The Oozie deployment script unzips the existing Oozie WAR file, zips it back up, and places the result in a subdirectory of /tmp/... . This unzip-and-zip process doubles the storage space required for the Oozie WAR file.
Make sure there is enough space in /tmp for this process to take place; the logs won't provide this information.
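A quick hedged pre-check (the WAR path below is a typical HDP location and an assumption, not quoted from the logs):

```bash
# Free space in /tmp should be at least ~2x the Oozie WAR size
du -sh /usr/hdp/current/oozie-server/oozie-server/webapps/oozie.war
df -h /tmp
```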
Tags:
- Hadoop Core
- Issue Resolution
- Oozie
05-23-2017
03:26 PM
Any pointers for deploying and reconfiguring Spark worker processes on GPUs?
Labels:
- Apache Spark
05-23-2017
03:07 PM
I noticed the scheduler option under each notebook in Zeppelin and was wondering if there is a scheduler overview page that shows all the scheduled notebooks and their run/job status.
Labels:
- Apache Zeppelin
05-09-2017
09:17 PM
Resolution/Workaround:
- Clear any value assigned to the Hive Configuration Resources property in the PutHiveStreaming processor. (With no site.xml files provided, NiFi will use the site.xml files that are loaded on the classpath.)
- To load the site.xml files (core-site.xml, hdfs-site.xml, and hive-site.xml) on NiFi's classpath, place them in NiFi's conf directory (for Ambari-based installs that is /etc/nifi/conf); see the sketch after this list.
- Restart NiFi.
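A minimal sketch of the copy step, assuming the standard Ambari-managed client config locations:

```bash
# Source paths assume default Ambari client configs; verify on your host
cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/nifi/conf/
cp /etc/hive/conf/hive-site.xml /etc/nifi/conf/
```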
07-12-2016
08:35 PM
Thanks all for your comments. It looks like the earlier upgrade missed the steps below:

ambari-server stop
ambari-server upgradestack HDP-2.3
ambari-server start

This was evident from the Ambari database tables: while finalizing the earlier upgrade, it errored out.
07-10-2016
05:01 PM
Yes, we upgraded the current prod cluster from Ambari 2.2.1.1 to 2.2.2 and are trying to upgrade HDP from 2.3.4.7 to 2.4.2. We were able to perform the same upgrade in the pre-prod environment without issues.
07-10-2016
09:04 AM
Registering the new HDP version while upgrading HDP from 2.3.4.7 to 2.4.2.0 fails with the error message below.

An internal system exception occurred: Stack HDP-2.4 doesn't have upgrade packages
[qtp-ambari-client-33] BaseManagementHandler:57 - Caught a system exception while attempting to create a resource: An internal system exception occurred: Stack HDP-2.4 doesn't have upgrade packages
org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Stack HDP-2.4 doesn't have upgrade packages
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.createResources(AbstractResourceProvider.java:282)
at org.apache.ambari.server.controller.internal.RepositoryVersionResourceProvider.createResources(RepositoryVersionResourceProvider.java:153)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:289)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:76)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:36)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:72)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:135)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:106)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:75)
at org.apache.ambari.server.api.services.RepositoryVersionService.createRepositoryVersion(RepositoryVersionService.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
Labels:
- Apache Ambari
06-27-2016
02:35 PM
Since Storm was never used in the cluster, the customer didn't want to spend time on research, so the service was removed.
06-06-2016
06:40 PM
Please refer to http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_ambari_views_guide/content/_reverse_proxy_views.html
06-01-2016
08:06 PM
3 Kudos
Solved the issue by updating mapred.admin.user.env, since the cluster was upgraded from HDP 2.1 to HDP 2.3.
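For reference, a hedged example of the kind of value this property needs on HDP 2.3 so the native compression libraries are picked up (the exact paths are an assumption and may differ per install; this is not quoted from the cluster):

```
# Illustrative value; verify the native library paths on your nodes
mapred.admin.user.env=LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64
```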
06-01-2016
01:49 AM
I did check; they do exist:

16/05/31 18:39:50 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/05/31 18:39:50 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/hdp/2.3.4.7-4/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/hdp/2.3.4.7-4/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
06-01-2016
12:40 AM
While running select count(*) from lte where date_id = '20160524'; I experience the error below, whereas a select without the where clause works fine.

Caused by: java.io.IOException: Unable to get CompressorType for codec (org.apache.hadoop.io.compress.SnappyCodec). This is most likely due to missing native libraries for the codec.
at org.apache.tez.runtime.library.common.sort.impl.ExternalSorter.<init>(ExternalSorter.java:217)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:121)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.<init>(PipelinedSorter.java:116)
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:142)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:142)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
... 14 more
Caused by: java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
at org.apache.tez.runtime.library.common.sort.impl.ExternalSorter.<init>(ExternalSorter.java:208)
... 19 more
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:9, Vertex vertex_1464716100177_0034_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1464716100177_0034_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1, Vertex vertex_1464716100177_0034_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
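A quick way to confirm whether the native Snappy library is actually visible to Hadoop on the failing node (standard Hadoop CLI, not taken from the post):

```bash
# Prints native library status for hadoop, zlib, snappy, lz4, and bzip2
hadoop checknative -a
```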
Labels:
- Apache Hive
03-17-2016
06:07 PM
1 Kudo
After upgrading Ambari to version 2.1.1, the service check for Storm fails with the errors below.

2016-03-16 10:55:48 b.s.d.nimbus [INFO] Received topology submission for WordCountid1e5a2fd5_date551616 with conf {"storm.id" "WordCountid1e5a2fd5_date551616-2-1458140148", "nimbus.host" "somehost.com", "topology.users" (), "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "WordCountid1e5a2fd5_date551616", "topology.submitter.principal" "", "topology.submitter.user" "", "topology.debug" true, "topology.kryo.register" nil, "topology.workers" 3, "storm.zookeeper.superACL" nil, "topology.max.task.parallelism" nil}
2016-03-16 10:55:48 b.s.d.nimbus [WARN] Topology submission exception. (topology name='WordCountid1e5a2fd5_date551616') #<RuntimeException java.lang.RuntimeException: org.apache.storm.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /credentials/WordCountid1e5a2fd5_date551616-2-1458140148>
2016-03-16 10:55:48 o.a.t.s.TNonblockingServer [ERROR] Unexpected exception while invoking!
java.lang.RuntimeException: org.apache.storm.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /credentials/WordCountid1e5a2fd5_date551616-2-1458140148
at backtype.storm.util$wrap_in_runtime.invoke(util.clj:47) ~[storm-core-0.9.3.2.2.6.0-2800.jar:0.9.3.2.2.6.0-2800]
at backtype.storm.zookeeper$create_node.invoke(zookeeper.clj:92) ~[storm-core-0.9.3.2.2.6.0-2800.jar:0.9.3.2.2.6.0-2800]
at backtype.storm.cluster$mk_distributed_cluster_state$reify__2327.set_data(cluster.clj:104) ~[storm-core-0.9.3.2.2.6.0-2800.jar:0.9.3.2.2.6.0-2800]
at backtype.storm.cluster$mk_storm_cluster_state$reify__2822.set_credentials_BANG_(cluster.clj:422) ~[storm-core-0.9.3.2.2.6.0-2800.jar:0.9.3.2.2.6.0-2800]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_45]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_45]
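One hedged way to inspect the ZooKeeper ACLs behind the NoAuth error (the hostname is a placeholder, and storm.zookeeper.root is assumed to be the default /storm):

```bash
# Open the ZooKeeper CLI (HDP client path assumed)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181
# then, at the zkCli prompt, check the ACLs on Storm's credentials znode:
#   getAcl /storm/credentials
```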
Labels:
- Apache Storm
03-04-2016
12:08 AM
1 Kudo
Does this mean that if the cluster is kerberized, we don't need Knox? Is a Ranger installation alone enough?
03-03-2016
05:31 PM
2 Kudos
My client has a similar issue: the job fails when submitted using the capacity scheduler, whereas it works fine with the fair scheduler.
03-02-2016
08:17 PM
1 Kudo
To configure this scenario, schedule-based policies are used. This is an alpha Apache feature, but I was not able to find any documentation associated with it. Could someone please share if they have any?
03-02-2016
06:29 PM
Thanks, that helps. At the same time, can you point me to "Setting Up Time-Based Queue Capacity Change"?
03-02-2016
05:34 PM
1 Kudo
How can we configure the cluster to have Spark separated from the other ecosystem components?
Labels:
- Apache Spark