Member since: 07-21-2016
Posts: 101
Kudos Received: 10
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2364 | 02-15-2020 05:19 PM |
|  | 43346 | 10-02-2017 08:22 PM |
|  | 740 | 09-28-2017 01:55 PM |
|  | 947 | 07-25-2016 04:09 PM |
02-15-2020
05:19 PM
Alright, all good now. The problem was with AD; in our environment the KDC is Active Directory. In AD there are two fields, "User logon name" and "User logon name (pre-Windows 2000)", and usually their values are the same. In our case the account names were generated automatically when we Kerberized the cluster, and for those accounts the two fields were different: the "User logon name (pre-Windows 2000)" value was a 20-character alphanumeric string. In a Kerberized cluster the service accounts have to map to the Hadoop service users such as "nn", "dn" and "rm". So we edited all the service accounts in AD and made "User logon name (pre-Windows 2000)" the same as "User logon name". In the HDFS configuration there is a property for "Auth-to-Local mappings"; we added rules there to convert the pattern (the service account name in AD) to the local service users (hdfs, nn, hive, dn, etc.).
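For reference, a minimal sketch of what such auth_to_local rules can look like; the realm EXAMPLE.CORP and the principal names are placeholders, not the actual values from this cluster. The rules go into hadoop.security.auth_to_local (core-site), which is what the Ambari "Auth-to-Local mappings" box edits:

```
RULE:[2:$1@$0](nn@EXAMPLE\.CORP)s/.*/hdfs/
RULE:[2:$1@$0](dn@EXAMPLE\.CORP)s/.*/hdfs/
RULE:[2:$1@$0](nm@EXAMPLE\.CORP)s/.*/yarn/
RULE:[2:$1@$0](rm@EXAMPLE\.CORP)s/.*/yarn/
RULE:[2:$1@$0](hive@EXAMPLE\.CORP)s/.*/hive/
DEFAULT
```

Each rule takes the first component of a two-part principal (e.g. nn/host@REALM), matches it against the pattern in parentheses, and rewrites it to the local user on the right.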
02-12-2020
11:34 AM
@prasanna_santha I am not sure if you have resolved this. I had the same issue today. The MR logs clearly say which gz file has the problem; I just removed that file and proceeded with the job.
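For anyone hitting the same thing, a rough sketch of the cleanup, assuming the failing task log names the file (the path below is a placeholder, not the real one from my job):

```
# confirm the file the mapper choked on, and optionally check its blocks
hdfs dfs -ls /data/landing/part-0007.gz
hdfs fsck /data/landing/part-0007.gz -files -blocks
# remove (or move aside) the corrupt file, then rerun the job
hdfs dfs -rm /data/landing/part-0007.gz
```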
02-08-2020
07:39 AM
@Shelton From the previous posts I see that you have dealt with these issues. Any idea? Also, in the JournalNode logs I see the following:
2020-02-08 15:33:58,011 INFO server.KerberosAuthenticationHandler (KerberosAuthenticationHandler.java:init(262)) - Login using keytab /etc/security/keytabs/spnego.service.keytab, for principal HTTP/node2.prod.iad.XXXXXXXXXXX.XXXXXXXXXXX.io@XXXXXXXXXXX.CORP
2020-02-08 15:33:58,018 INFO server.KerberosAuthenticationHandler (KerberosAuthenticationHandler.java:init(281)) - Map server: node2.prod.iad.XXXXXXXXXXX.XXXXXXXXXXX.io to principal: [HTTP/node2.prod.iad.XXXXXXXXXXX.XXXXXXXXXXX.io@XXXXXXXXXXX.CORP], added = true
2020-02-08 15:33:58,034 INFO mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8480
2020-02-08 15:33:58,146 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-02-08 15:33:58,164 INFO ipc.Server (Server.java:run(821)) - Starting Socket Reader #1 for port 8485
2020-02-08 15:33:58,402 INFO ipc.Server (Server.java:run(1064)) - IPC Server Responder: starting
2020-02-08 15:33:58,403 INFO ipc.Server (Server.java:run(900)) - IPC Server listener on 8485: starting
2020-02-08 15:34:19,823 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $7D8300-H79FE35P680K@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 15:34:19,874 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.17:43312 for protocol org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol is unauthorized for user nn/admin2.prod.iad.XXXXXXXXXXX.XXXXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $7D8300-H79FE35P680K@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 15:34:19,875 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8485: readAndProcess from client XX.XX.48.17 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $7D8300-H79FE35P680K@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin2.prod.iad.XXXXXXXXXXX.XXXXXXXXXXX.io@XXXXXXXXXXX.CORP]
02-07-2020
10:27 PM
2020-02-08 06:24:59,879 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $7D8300-H79FE35P680K@XXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:24:59,880 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.17:44290 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin2.XXXXXXXXXX.io@XXXXXXXX.CORP (auth:PROXY) via $7D8300-H79FE35P680K@XXXXXXX.CORP (auth:KERBEROS)
02-07-2020
10:23 PM
Hi,
I had to reboot one of my DataNodes. While rebooting, I realized that the node I was rebooting also acts as a JournalNode.
The cluster is Kerberized and HA-enabled. After the reboot, both NameNodes come up as Standby. I have tried every restart sequence I can think of, with no luck.
Here are the ZKFC (failover controller) logs:
2020-02-07 20:22:47,823 WARN ha.HealthMonitor (HealthMonitor.java:doHealthChecks(211)) - Transport-level exception trying to monitor health of NameNode at admin1.XXXX.io/XX.4.48.11:8020: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/XX.XX.48.11:43263 remote=admin1.XXXX.io/XX.XX.48.11:8020] Call From admin1.XXXX.io/XX.XX.48.11 to admin1.XXXX.io:8020 failed on socket timeout exception: java.net.SocketTimeoutException: 45000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/XX.XX.48.11:43263 remote=admin1.XXXX.io/XX.XX.48.11:8020]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
2020-02-07 20:22:47,823 INFO ha.HealthMonitor (HealthMonitor.java:enterState(249)) - Entering state SERVICE_NOT_RESPONDING
2020-02-07 20:23:01,410 FATAL ha.ZKFailoverController (ZKFailoverController.java:becomeActive(401)) - Couldn't make NameNode at admin1.XXXX.io/XX.XX.48.11:8020 active
java.io.EOFException: End of File Exception between local host is: "admin1.XXXX.io/XX.XX.48.11"; destination host is: "admin1.XXXX.io":8020; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
    at org.apache.hadoop.ipc.Client.call(Client.java:1498)
    at org.apache.hadoop.ipc.Client.call(Client.java:1398)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at com.sun.proxy.$Proxy9.transitionToActive(Unknown Source)
    at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToActive(HAServiceProtocolClientSideTranslatorPB.java:100)
    at org.apache.hadoop.ha.HAServiceProtocolHelper.transitionToActive(HAServiceProtocolHelper.java:48)
    at org.apache.hadoop.ha.ZKFailoverController.becomeActive(ZKFailoverController.java:390)
    at org.apache.hadoop.ha.ZKFailoverController.access$900(ZKFailoverController.java:61)
    at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.becomeActive(ZKFailoverController.java:880)
    at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:864)
    at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:468)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:611)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1119)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1014)
2020-02-07 20:23:01,411 WARN ha.ActiveStandbyElector (ActiveStandbyElector.java:becomeActive(868)) - Exception handling the winning of election
org.apache.hadoop.ha.ServiceFailedException: Couldn't transition to active
NameNode logs:
2020-02-08 06:18:55,184 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.11:34267 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:55,185 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client XX.XX.48.11 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP]
2020-02-08 06:18:56,190 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:56,191 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.11:41305 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:56,191 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client XX.XX.48.11 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP]
2020-02-08 06:18:57,197 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:57,198 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.11:44308 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:57,198 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client XX.XX.48.11 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP]
2020-02-08 06:18:58,204 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:58,205 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.11:40352 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:58,205 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client XX.XX.48.11 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP]
2020-02-08 06:18:59,211 INFO ipc.Server (Server.java:saslProcess(1573)) - Auth successful for $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:59,211 INFO ipc.Server (Server.java:authorizeConnection(2235)) - Connection from XX.XX.48.11:39431 for protocol org.apache.hadoop.ha.HAServiceProtocol is unauthorized for user nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP (auth:PROXY) via $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP (auth:KERBEROS)
2020-02-08 06:18:59,212 INFO ipc.Server (Server.java:doRead(1006)) - Socket Reader #1 for port 8020: readAndProcess from client XX.XX.48.11 threw exception [org.apache.hadoop.security.authorize.AuthorizationException: User: $FC8300-R8HGK424M6OR@XXXXXXXXXXX.CORP is not allowed to impersonate nn/admin1.XXXXXXXXX.io@XXXXXXXXXXX.CORP]
Labels:
- Apache Hadoop
04-15-2019
09:49 PM
@Dhiraj Sardana Did you get this fixed? If so, please share the solution. Thanks, Kumar
04-15-2019
08:23 PM
Was there a case raised for this issue? I am in the same situation and need help with this. Thanks, Kumar
01-25-2019
03:03 PM
I have a requirement to modify the email subject of Ambari alert notifications for certain alerts. For example, for anything related to HDFS I need "CRITICAL HDFS" in the subject line. I went through this link: https://community.hortonworks.com/questions/198815/ambari-email-alert-format.html but it does not cover customizing the subject for specific alerts. Has anyone set this up in their environment?
11-27-2018
05:56 PM
In my cluster I have Spark 1.6 and I am trying to add Spark2. I was able to add the service, but the Spark2 service does not come up. I am not sure why it is checking Spark 1.6 and complaining that it is already up and running. Here is an extract from the logs:
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/spark2-historyserver/sbin/start-history-server.sh' returned 1. org.apache.spark.deploy.history.HistoryServer running as process 39259. Stop it first
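A rough sketch of what one might try here, under the assumption that the pid the script complains about belongs to the Spark 1.6 history server (paths are the usual HDP symlinks and may differ in your install):

```
# see which history server owns the pid the start script complains about
ps -ef | grep 39259
# if it is the Spark 1.6 history server, stop it before starting the Spark2 one
/usr/hdp/current/spark-historyserver/sbin/stop-history-server.sh
# if the process no longer exists, the pid file is stale; remove it (its location depends on SPARK_PID_DIR)
# then start the Spark2 history server again from Ambari
```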
Labels:
- Apache Spark
10-23-2018
03:23 AM
I just opened this file and saw the following header: Path | Replication | ModificationTime | AccessTime | PreferredBlockSize | BlocksCount | FileSize | NSQUOTA | DSQUOTA | Permission | UserName | GroupName
10-23-2018
03:19 AM
Recently I started seeing a few huge tmp.**** files in the local /tmp directory, and I started getting file system alerts from the OS. These files are owned by the hdfs user. Any insights?
Labels:
- Apache Hadoop
09-06-2018
02:35 PM
We upgraded our cluster last week to HDP 2.6.5.0, and since then HiveServer2 has gone down twice due to Java heap size. Currently "HiveServer2 Heap Size" is set to 512 MB. I am looking for any documentation with Hortonworks' minimum recommended configs for specific versions. Thanks, Kumar
09-05-2018
03:47 PM
@Akhil S Naik Thank you. Looks like my cluster is not rack-aware. Any help on my second question is highly appreciated.
09-05-2018
01:56 PM
I got a new environment and just got access to Ambari. Is there a way to find out whether rack topology has been implemented just by looking at Ambari? The reason I ask is that I get constant load-average alerts from a particular box, and I found that box has less CPU and memory than the others. Shouldn't the ResourceManager send fewer tasks to that particular node? Thanks, Kumar
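One quick way to check, assuming you can run HDFS admin commands from a cluster node (this is a generic check, not specific to any one setup):

```
# lists every DataNode under its rack; if everything sits under /default-rack, no rack topology is in place
hdfs dfsadmin -printTopology
# in Ambari you can also look at core-site's net.topology.script.file.name;
# if that property is empty, no rack topology script is configured
```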
Labels:
- Apache Hadoop
- Apache YARN
- Cloudera Manager
08-15-2018
07:55 PM
My cluster is Kerberized and I am able to access Hive through beeline from any node inside the cluster. I use the following command to connect: beeline -u 'jdbc:hive2://<Node Name>:10000/default;principal=hive/<Node Name>@MYCOMPANY.CORP' What I am trying to do is connect to Hive from my laptop (macOS). On my Mac I ran "kinit <my user name>" and it generated a ticket for me, but I do not know where it is stored or where the cache lives. I have the beeline client on my laptop and I tried the same command: beeline -u 'jdbc:hive2://<Node Name>:10000/default;principal=hive/<Node Name>@MYCOMPANY.CORP' It kicks me out with "Can't get Kerberos realm (state=08S01,code=0)". Has anyone connected to Hive with beeline using Kerberos authentication? Thanks, Kumar
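For what it is worth, a rough sketch of what usually makes this work from a Mac; the paths and realm are placeholders, and which environment variable the beeline wrapper actually honors depends on how it launches the JVM:

```
# copy the cluster's krb5.conf so the laptop knows the realm and KDCs
scp user@cluster-node:/etc/krb5.conf ~/krb5.conf
export KRB5_CONFIG=~/krb5.conf           # used by the command-line kinit/klist
kinit myusername@MYCOMPANY.CORP
klist                                    # shows where the ticket cache actually lives on macOS
# the JVM does not read KRB5_CONFIG, so point it at the same file
# (HADOOP_CLIENT_OPTS, HADOOP_OPTS, or JAVA_TOOL_OPTIONS depending on your beeline launcher)
export HADOOP_CLIENT_OPTS="-Djava.security.krb5.conf=$HOME/krb5.conf"
beeline -u 'jdbc:hive2://<Node Name>:10000/default;principal=hive/<Node Name>@MYCOMPANY.CORP'
```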
Labels:
- Apache Hive
08-13-2018
07:30 PM
Hi, I need to access HiveServer2 from my laptop through the beeline client. How do I install the beeline client? I am on macOS. I tried "brew install beeline", but that does not appear to be right. Let me know if someone has the same setup. Thanks, Kumar
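One approach that generally works is to pull a Hive release tarball and use its bundled beeline; a sketch only, with an illustrative version/URL that should be matched to your cluster (and a JDK already installed):

```
curl -O https://archive.apache.org/dist/hive/hive-1.2.2/apache-hive-1.2.2-bin.tar.gz
tar -xzf apache-hive-1.2.2-bin.tar.gz
export HIVE_HOME=$PWD/apache-hive-1.2.2-bin
# note: the hive launcher scripts usually also expect a Hadoop client (HADOOP_HOME) on the path,
# so the matching Hadoop tarball may be needed as well
$HIVE_HOME/bin/beeline -u 'jdbc:hive2://<Node Name>:10000/default;principal=hive/<Node Name>@MYCOMPANY.CORP'
```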
Labels:
- Apache Hive
07-16-2018
11:36 PM
@rguruvannagari @Jay Kumar SenSharma In my environment HiveServer2 authentication is Kerberos, not LDAP. Anyway, I just typed some word (as a dummy password) in both text boxes, restarted the Hive service, and it does not complain.
07-13-2018
09:34 PM
My Ambari is not tied to LDAP and my HDP version is 2.6. Under the Hive config I see the following alert: "LDAP password for Alerts". Please refer to the screenshot: screen-shot-2018-07-13-at-42527-pm.png I am really not sure what password I am supposed to provide here. Please advise.
Tags: Ambari, Hadoop Core
Labels:
- Apache Ambari
07-13-2018
06:47 PM
@Vinicus Higa Murakami It worked. I just had to do one extra step: in my case I converted my *.crt file into a *.pem file and then used that *.pem file to generate the JKS. Thanks, Kumar
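For anyone else landing here, a sketch of the conversion I described; file names, paths, and the keystore password below are placeholders:

```
# convert the DER-encoded .crt to PEM (skip this if the .crt is already PEM text)
openssl x509 -inform der -in sqlserver.crt -out sqlserver.pem
# import the PEM cert into a JKS truststore
keytool -importcert -alias sqlserver-ca -file sqlserver.pem \
  -keystore /etc/pki/CA/certs/sqlserver-truststore.jks -storepass changeit -noprompt
# then reference the JKS (not the raw .crt) in the JDBC URL:
#   ...;encrypt=true;trustServerCertificate=false;trustStore=/etc/pki/CA/certs/sqlserver-truststore.jks;trustStorePassword=changeit
```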
07-09-2018
05:03 PM
The goal here is to import data from a MS SQL Server database into HDFS. The connectivity between the Hadoop cluster and MS SQL Server works fine; I confirmed this by telneting to port 1433. I am also able to list tables:
[root@api1.dev ~]# sudo -u XXXXXXX /usr/hdp/current/sqoop-client/bin/sqoop list-tables --connect "jdbc:sqlserver://XX.XX.XXX.XXX:1433;database=XXXXXXXXXX;username=XXXXXXXX;password=XXXXXXXX"
Warning: /usr/hdp/2.6.4.0-91/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/hdp/2.6.4.0-91/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/07/09 16:44:43 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.4.0-91
18/07/09 16:44:43 INFO manager.SqlManager: Using default fetchSize of 1000
XXXXXXXXXXXXXX<the table name >
The DBAs have enabled SSL encryption on the database side and have shared the SSL certificate, asking us to use it when we pull data out of the database. I went through the JDBC documentation at https://docs.microsoft.com/en-us/sql/connect/jdbc/connecting-with-ssl-encryption?view=sql-server-2017 and here is the command I arrived at:
[root@api1.dev ~]# sudo -u XXXXXXXX /usr/hdp/current/sqoop-client/bin/sqoop import --connect "jdbc:sqlserver://XX.XXX.XXXX.XXX:1433;database=XXXXXXXXXX;username=XXXXXXXXXX;password=XXXXXXXX;encrypt=true;trustServerCertificate=false;trustStore=/etc/pki/CA/certs/XXXXXXXXXXXXX.crt" --table XXXXXXXXXX --fields-terminated-by , --escaped-by \\ --enclosed-by '"' --compress -m 1 --target-dir /user/XXXXXXXXXXXX/ --append --hive-drop-import-delims -- --schema dbo --table-hints NOLOCK
Here is the exception that I get:
INFO: java.security path: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/security
Security providers: [SUN version 1.8, SunRsaSign version 1.8, SunEC version 1.8, SunJSSE version 1.8, SunJCE version 1.8, SunJGSS version 1.8, SunSASL version 1.8, XMLDSig version 1.8, SunPCSC version 1.8]
KeyStore provider info: SUN (DSA key/parameter generation; DSA signing; SHA-1, MD5 digests; SecureRandom; X.509 certificates; JKS & DKS keystores; PKIX CertPathValidator; PKIX CertPathBuilder; LDAP, Collection CertStores, JavaPolicy Policy; JavaLoginConfig Configuration)
java.ext.dirs: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/ext:/usr/java/packages/lib/ext
18/07/09 16:50:26 ERROR manager.SqlManager: Error executing statement: com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Invalid keystore format". ClientConnectionId:daf3f972-6029-4629-8817-7bb8ac260c5c
com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Invalid keystore format". ClientConnectionId:daf3f972-6029-4629-8817-7bb8ac260c5c
at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:1667)
at com.microsoft.sqlserver.jdbc.TDSChannel.enableSSL(IOBuffer.java:1668)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1323)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:991)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:827)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1012)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:902)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:289)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:260)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:246)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:328)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1853)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1653)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:488)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
Caused by: java.io.IOException: Invalid keystore format
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:658)
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
at java.security.KeyStore.load(KeyStore.java:1445)
at com.microsoft.sqlserver.jdbc.TDSChannel.enableSSL(IOBuffer.java:1525)
... 25 more
After some more reading, it appears the certificate needs to be converted into JKS format. Has anybody been in this situation?
Labels:
- Apache Sqoop
06-26-2018
09:51 PM
I am running a simple sqoop import command, which in turn runs a MapReduce job. I am testing this after standing up the cluster, and I see this error in the logs:
[2018-06-26 21:02:59,672] {bash_operator.py:76} INFO - main : requested yarn user is ods_archive
[2018-06-26 21:02:59,673] {bash_operator.py:76} INFO - Path /disk1/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
[2018-06-26 21:02:59,673] {bash_operator.py:76} INFO - Path /disk2/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
[2018-06-26 21:02:59,673] {bash_operator.py:76} INFO - Path /disk3/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
[2018-06-26 21:02:59,674] {bash_operator.py:76} INFO - Path /disk4/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
[2018-06-26 21:02:59,674] {bash_operator.py:76} INFO - Path /disk5/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
[2018-06-26 21:02:59,674] {bash_operator.py:76} INFO - Path /disk6/hadoop/yarn/local/usercache/ods_archive/appcache/application_1530031055103_0009 has permission 700 but needs permission 750.
After checking the file system, I see the directories do not have group read/execute permissions (they are 700, not 750):
[root@node3.dev appcache]# pwd
/disk1/hadoop/yarn/local/usercache/ods_archive/appcache
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 19:20 application_1530031055103_0005
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 19:26 application_1530031055103_0006
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 19:38 application_1530031055103_0008
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 21:04 application_1530031055103_0009
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 21:10 application_1530031055103_0010
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 21:16 application_1530031055103_0011
drwx--S--- 4 ods_archive hadoop 4096 Jun 26 21:23 application_1530031055103_0012
Am I missing a place where these permissions should be set?
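Based on the message alone (so treat this as an assumption, not a confirmed fix), the application directories look like they were created with a different container executor or umask than YARN now expects; a common remediation is to let YARN recreate them:

```
# on each NodeManager host, with the NodeManager stopped
# (assumption: these are only per-application caches and are safe to clear when no jobs are running)
rm -rf /disk{1..6}/hadoop/yarn/local/usercache/ods_archive/appcache/*
# start the NodeManager again and rerun the job; YARN should recreate the dirs with the expected 750 mode
```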
Tags: Hadoop Core, YARN
Labels:
- Apache YARN
06-13-2018
06:29 PM
Team, I built a new cluster and we have jobs that pull data out of MS SQL Server. MS SQL Server listens on port 1433, and our network security team has declined to open the firewall between our Hadoop cluster and MS SQL Server, saying that port 1433 is a non-secure port. The MS SQL DBAs said they cannot enable SSL on the DB side because other (legacy) applications would not be able to connect to MS SQL Server. Now, from the Hadoop side, we need to ensure our connections are secure. Has anybody faced this situation? Thanks, Kumar
Labels:
- Apache Hadoop
- Apache Sqoop
04-24-2018
03:30 PM
@Jay Kumar SenSharma Thanks for your reply. Looks like there were stale alerts; all the alerts went away after I restarted the Ambari agents.
04-24-2018
02:09 PM
The nodes in my cluster do not have direct access to the Internet, so we were given proxies. I am referring to the document: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-administration/content/ch_setting_up_an_internet_proxy_server_for_ambari.html I am trying to add no-proxy entries for a few nodes, but as soon as I add more than one node, pipe-delimited, the Ambari server does not restart. Has anybody faced this situation?
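In case it helps others, a sketch of the relevant line in /var/lib/ambari-server/ambari-env.sh; the hostnames are placeholders, and my assumption is that unquoted pipe characters are what break the restart, since the shell otherwise treats them as a pipeline:

```
# keep the whole value inside the double quotes so the | characters are taken literally
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts=node1.example.com|node2.example.com|*.example.internal"
# then: ambari-server restart
```

The exact escaping can depend on how the startup script expands the variable; some installs reportedly need the pipes escaped as \| instead.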
Labels:
- Apache Ambari
- Apache Hadoop
04-24-2018
01:40 PM
I have installed a brand new cluster and all the services are up and running, but Ambari shows alerts for all web UIs. Here are a few examples:
DataNode Web UI: Connection failed to http://node1.dev.XXXXXXXX.io:50075 (timed out)
ResourceManager Web UI: Connection failed to http://api1.dev.XXXXXXX.io:8088 (timed out)
Does anybody have an idea which logs I need to check to get more details?
Labels:
- Apache Hadoop
04-17-2018
05:06 PM
In my PROD environment, the infrastructure team is going to patch the top-of-rack switches, and I have learned that we do not have HA enabled on the switch side. My understanding is that my cluster will not function with the switch down. Am I correct? Also, what services do I need to stop? Thanks, Kumar
Labels:
- Apache Hadoop
02-26-2018
08:01 PM
We have daily jobs that load data into Hive from MS SQL. Small tables are truncated and loaded every day. Can someone explain what the following command does? hive -e 'alter table $db_name.$subject touch' Please let me know if you need more details. Thanks, Kumar
Tags: Data Processing, Hive
Labels:
- Apache Hive
02-09-2018
11:49 AM
Folks, this may be a simple question: what is the difference between the two commands below?
1) yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 10 10000
2) hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 10 10000
I ran both and compared the logs, and they look the same; in fact both took the same amount of time. Thanks, Kumar
Labels:
- Apache Hadoop
- Apache YARN
10-10-2017
07:43 PM
We have a few applications connecting to our cluster through the NameNode. Currently they connect to the active NameNode, but when a failover happens they have to modify their connection string manually. Is there a better way to handle this situation without any manual intervention?
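The standard way to avoid this (a sketch under the assumption that HDFS HA is already set up; all names below are placeholders) is to have clients address the logical nameservice instead of a specific NameNode host, so the client library locates the active NameNode itself. The client-side hdfs-site/core-site would carry roughly:

```
dfs.nameservices = mycluster
dfs.ha.namenodes.mycluster = nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1 = namenode1.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = namenode2.example.com:8020
dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
fs.defaultFS = hdfs://mycluster
```

Applications then use hdfs://mycluster/... in their paths and connection strings and never need to know which NameNode is currently active.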