Member since: 10-28-2024
Posts: 6
Kudos Received: 7
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 345 | 11-07-2024 10:29 PM |
12-20-2024
05:13 AM
1 Kudo
Hi, I have integrated secured NiFi with NGINX, and NGINX is integrated with the OneLogin "SAML 2.0 Custom Connector (Advanced)". After I enter my OneLogin credentials, the OneLogin portal shows the user as logged in, but the browser is redirected to /nifi-api/access/saml/login/consumer and shows:

```
HTTP ERROR 401 Unauthorized
URI:     /nifi-api/access/saml/login/consumer
STATUS:  401
MESSAGE: Unauthorized
```

A few findings:

- The Recipient value in the SAML payload is empty.
- The cookie value does not match the InResponseTo value in the SAML payload (I am also not sure how these are supposed to be matched).

In nifi-user.log I can see the error: "SAML Authentication Request Identifier Cookie not found". Can anyone please guide here?
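A common cause of the "SAML Authentication Request Identifier Cookie not found" error behind a reverse proxy is that the browser's cookie, set during the login request, is not presented back to NiFi at the consumer URL because the proxy changes the host, scheme, or path that NiFi uses to build its URLs. A minimal NGINX sketch, assuming `nifi-host:8443` stands in for your NiFi address (the `X-Proxy*` headers are the ones NiFi recognizes when running behind a proxy):

```nginx
location / {
    proxy_pass https://nifi-host:8443;

    # Headers NiFi uses to build redirect/consumer URLs behind a proxy
    proxy_set_header X-ProxyScheme https;
    proxy_set_header X-ProxyHost $host;
    proxy_set_header X-ProxyPort 443;
    proxy_set_header X-ProxyContextPath /;

    # Keep the original Host so the request-identifier cookie set during
    # /nifi-api/access/saml/login/request is sent back to .../consumer
    proxy_set_header Host $host;
}
```

You may also need to whitelist the proxy's hostname in `nifi.web.proxy.host` in nifi.properties; otherwise NiFi rejects proxied requests outright.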
Labels:
- Apache NiFi
11-10-2024
11:22 PM
1 Kudo
Thanks, yes, it was a version mismatch issue; upgrading to Hive 4.0 and Tez 0.10.3 worked for me.
11-10-2024
11:15 PM
1 Kudo
Hi, I am trying to run a simple query on Hive with Tez as the execution engine:

```
insert into table test values ('a', 'b');
```

There are only two columns in the table, both strings. If I insert into an int column it works fine, and inserting string data through the LOAD DATA command also works fine. Below is the error I am getting in the YARN logs:

```
ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1731065811939_0001_1_00, diagnostics=[Vertex vertex_1731065811939_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: _dummy_table initializer failed, vertex=vertex_1731065811939_0001_1_00 [Map 1], java.lang.RuntimeException: Failed to load plan: hdfs://localhost:9000/tmp/hive/hadoopuser/a86dedab-ce0f-4f06-8056-c37aef3cca55/hive_2024-11-11_12-26-32_369_2945486912443049901-2/hadoopuser/_tez_scratch_dir/8d991848-0c76-4ffa-a82e-76f0ca5140e5/map.xml
    at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:525)
    at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:369)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:480)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:508)
    at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateOldSplits(MRInputHelpers.java:472)
    at org.apache.tez.mapreduce.hadoop.MRInputHelpers.generateInputSplitsToMem(MRInputHelpers.java:321)
    at org.apache.tez.mapreduce.common.MRInputAMSplitGenerator.initialize(MRInputAMSplitGenerator.java:121)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:75)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 112
Serialization trace:
conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
    at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:159)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:758)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:188)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:117)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:129)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:877)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:183)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:235)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:42)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:796)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:221)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.ReflectField.read(ReflectField.java:124)
    at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:129)
    at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:774)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:213)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializeObjectByKryo(SerializationUtilities.java:838)
    at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:745)
    at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:489)
    ... 19 more ]
```

Can anyone guide here?
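Kryo's "Encountered unregistered class ID" while deserializing map.xml is commonly reported when the hive-exec jar that serialized the plan (on HiveServer2) differs from the one deserializing it inside the Tez container, i.e. two different Hive versions on the two classpaths. A small sketch for spotting that, assuming you have collected jar file names from both classpaths (`find_version_mismatches` is a hypothetical helper, not Hive tooling):

```python
import re

def find_version_mismatches(jar_names):
    """Group versioned jar names by base name and return only the bases
    that appear with more than one version, e.g. two hive-exec jars."""
    versions = {}
    pattern = re.compile(r"^(?P<base>[a-zA-Z][\w.-]*?)-(?P<ver>\d[\w.]*)\.jar$")
    for name in jar_names:
        m = pattern.match(name)
        if m:
            versions.setdefault(m.group("base"), set()).add(m.group("ver"))
    return {base: sorted(vers) for base, vers in versions.items() if len(vers) > 1}

# Example: hive-exec present twice with different versions gets flagged.
jars = ["hive-exec-3.1.3.jar", "hive-exec-4.0.0.jar", "tez-api-0.10.3.jar"]
print(find_version_mismatches(jars))  # {'hive-exec': ['3.1.3', '4.0.0']}
```

In practice the jar lists would come from `$HIVE_HOME/lib` and from the contents of the Tez tarball referenced by `tez.lib.uris`; any duplicated hive-exec or kryo jar there is a strong suspect.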
Labels:
- Apache Hive
- Apache Tez
- Apache YARN
- HDFS
- MapReduce
11-07-2024
10:39 PM
1 Kudo
Hi, I have a Hadoop cluster with Hadoop 3.4.0, Hive 3.1.3, and Tez 0.9.2. When I try to insert data:

```
insert into client_orc_update (client_name) values ("hello");
```

I am facing this issue:

```
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 95
Serialization trace:
conf (org.apache.hadoop.hive.ql.exec.TableScanOperator)
aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
```

I am getting another error on an INSERT OVERWRITE command when transferring data from one table to another:

```
Caused by: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.hadoop.fs.FileStatus.compareTo(Lorg/apache/hadoop/fs/FileStatus;)I
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1790)
    ... 17 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.fs.FileStatus.compareTo(Lorg/apache/hadoop/fs/FileStatus;)I
```

When I run the above queries on MR they work, but if I change the execution engine to Tez, they throw these errors. Can anyone help here? Thanks
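The `NoSuchMethodError` suggests a binary mismatch: Hive's ORC split generation was compiled against a `FileStatus` that declares `compareTo(FileStatus)`, but the `FileStatus` class actually loaded at runtime (possibly from an older hadoop-common jar bundled into the Tez tarball) declares a different signature. One way to check is to run `javap` against the jar on each classpath (e.g. `javap -classpath hadoop-common-3.4.0.jar org.apache.hadoop.fs.FileStatus`) and look for the method. A small sketch that scans such a dump; the dump text below is illustrative, not taken from a real jar:

```python
def has_method(javap_output, signature):
    """Return True if a javap class dump contains the given method signature."""
    return any(signature in line for line in javap_output.splitlines())

# Hypothetical javap dump of org.apache.hadoop.fs.FileStatus:
dump = """
public class org.apache.hadoop.fs.FileStatus implements java.lang.Comparable<org.apache.hadoop.fs.FileStatus> {
  public int compareTo(org.apache.hadoop.fs.FileStatus);
}
"""
print(has_method(dump, "compareTo(org.apache.hadoop.fs.FileStatus)"))  # True
```

If the signature is missing from the jar that wins on the Tez container classpath, aligning the Hive/Tez/Hadoop versions (as in the accepted answer above: Hive 4.0 with Tez 0.10.3) is the usual fix.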
Labels:
- Apache Hive
- Apache Tez
- Apache YARN
- HDFS
- MapReduce
11-07-2024
10:29 PM
2 Kudos
Thanks for the suggestion; the issue has been resolved. We had added a new DataNode and then restarted the NameNode, ResourceManager, DataNode, and NodeManager, but not HiveServer, because of which the configuration was not loaded properly in Hive. After restarting HiveServer it started working.
11-06-2024
11:21 PM
1 Kudo
Hi, I have a Hadoop cluster with the NameNode and ResourceManager on one server, the DataNode on another server, and Hive and Tez on a different server. I am getting an error when running a query from Beeline. Below are the YARN logs; it keeps retrying the connection:

```
2024-10-31 15:57:49,806 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |rm.TaskSchedulerManager|: Creating TaskScheduler: Local TaskScheduler with clusterIdentifier=111101111
2024-10-31 15:57:49,813 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |rm.YarnTaskSchedulerService|: YarnTaskScheduler initialized with configuration: maxRMHeartbeatInterval: 1000, containerReuseEnabled: true, reuseRackLocal: true, reuseNonLocal: false, localitySchedulingDelay: 250, preemptionPercentage: 10, preemptionMaxWaitTime: 60000, numHeartbeatsBetweenPreemptions: 3, idleContainerMinTimeout: 5000, idleContainerMaxTimeout: 10000, sessionMinHeldContainers: 0
2024-10-31 15:57:49,817 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |client.RMProxy|: Connecting to ResourceManager at /0.0.0.0:8030
2024-10-31 15:57:50,834 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |ipc.Client|: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2024-10-31 15:57:51,836 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |ipc.Client|: Retrying connect to server: 0.0.0.0/0.0.0.0:8030. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2024-10-31 15:57:52,837 [INFO] [ServiceThread:org.apache.tez.dag.app.rm.TaskSchedulerManager] |ipc.Client|: Retrying connect to server: 0.0.0.0/0.0.0.0:8030.
```

A few troubleshooting steps I have done: I checked the yarn-site.xml file on all instances; the hostname and all three ResourceManager addresses are set:

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>node1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node1:8031</value>
  </property>
  <property>
    <name>yarn.nodemanager.address</name>
    <value>node1:59392</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>124491</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>125</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>50115</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>54</value>
  </property>
</configuration>
```

I also checked: `telnet node1 8030` works, `ping node1` works, and /etc/hosts also seems to be fine.
Labels:
- Apache Hive
- Apache Tez
- Apache YARN
- HDFS