Member since: 09-24-2015
Posts: 178
Kudos Received: 113
Solutions: 28

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 625 | 05-25-2016 02:39 AM |
| | 1119 | 05-03-2016 01:27 PM |
| | 152 | 04-26-2016 07:59 PM |
| | 5929 | 03-24-2016 04:10 PM |
| | 385 | 02-02-2016 11:50 PM |
01-20-2018
12:22 AM
Very helpful response, Yolanda. One minor typo: in 1.a) the path is /etc/yum.repos.d/ambari-hdp-*.repo, for those who are not familiar with the yum repo location. BTW, this solution works for me.
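For anyone double-checking this on their own node, a quick way to confirm which Ambari-managed HDP repo files are present (a sketch; exact file names vary by Ambari/HDP version):
ls -l /etc/yum.repos.d/ambari-hdp-*.repo              # list the Ambari-managed HDP repo files
grep -H baseurl /etc/yum.repos.d/ambari-hdp-*.repo    # show where each repo points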
03-17-2017
11:34 PM
I am attempting to create and use a Phoenix table on an HBase table that was originally created from Hive using HBaseStorageHandler. However, I get an error when selecting data from the Phoenix table.
Hive Table DDL:
create table MYTBL(
col1 string,
col2 int,
col3 int,
col4 string )
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES
("hbase.columns.mapping" = ":key,
attr:col2,
attr:col3,
attr:col4")
TBLPROPERTIES("hbase.table.name" = "MYTBL");
Phoenix Table DDL:
CREATE TABLE "MYTBL" (
pk VARCHAR PRIMARY KEY,
"attr"."col2" INTEGER,
"attr"."col3" INTEGER,
"attr"."col4" VARCHAR )
Once both tables are created, I insert data into the Hive table using:
insert into table MYTBL values ("hive", 1, 2, "m");
At this point the data is available in the Hive table and in the underlying HBase table. I can also insert data into the Phoenix table, and it shows up in the underlying HBase table:
upsert into "MYTBL" values ('phoenix', 3, 4, 'm+c');
One thing to note here is how the integer values are stored for the data inserted through Phoenix. When I run a select query from Phoenix, it gives an error while parsing the integer fields inserted via Hive -> HBase. Text version of the error below:
0: jdbc:phoenix:> select * from MYTBL;
Error: ERROR 201 (22000): Illegal data. Expected length of at least 4 bytes, but had 2 (state=22000,code=201)
java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at least 4 bytes, but had 2
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at org.apache.phoenix.schema.KeyValueSchema.next(KeyValueSchema.java:211)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:165)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:171)
at org.apache.phoenix.schema.KeyValueSchema.iterator(KeyValueSchema.java:175)
at org.apache.phoenix.expression.ProjectedColumnExpression.evaluate(ProjectedColumnExpression.java:114)
at org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
at org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:608)
at sqlline.Rows$Row.<init>(Rows.java:183)
at sqlline.BufferedRows.<init>(BufferedRows.java:38)
at sqlline.SqlLine.print(SqlLine.java:1650)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
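For what it's worth, the encoding difference can be inspected directly in the HBase shell; the scan below is a sketch based on the DDL above, and the output shown in comments is illustrative:
# Compare how the two writers serialized the integer columns:
echo "scan 'MYTBL'" | hbase shell
# Typical output (approximate):
#   hive      column=attr:col2, value=1                  <- Hive's default HBase mapping stores the int as the string "1"
#   phoenix   column=attr:col2, value=\x80\x00\x00\x03   <- Phoenix INTEGER is a fixed 4-byte encoding
# which lines up with Phoenix complaining "Expected length of at least 4 bytes".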
02-14-2017
06:54 PM
Is it possible to store a timestamp value in a Hive column without the timezone part? For example, Oracle supports a timestamp value without a timezone attached to it: https://docs.oracle.com/cd/B19306_01/server.102/b14225/ch4datetime.htm#i1006760 The requirement is simply to store whatever timestamp value is given in the column. Currently, Hive automatically applies a Daylight Saving Time adjustment based on the timezone value. Any inputs are appreciated. Thanks
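One workaround I have seen (a sketch with a made-up table name, not a Hive switch that turns the conversion off) is to store the literal value in a STRING column, so Hive never applies any timezone or DST adjustment:
# Keep the literal text (e.g. '2017-02-14 06:54:00') exactly as received
hive -e "CREATE TABLE event_log (id INT, event_ts STRING)"
hive -e "INSERT INTO TABLE event_log VALUES (1, '2017-02-14 06:54:00')"
# Cast to TIMESTAMP only at query time, when date arithmetic is actually needed:
hive -e "SELECT id, CAST(event_ts AS TIMESTAMP) FROM event_log"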
02-09-2017
05:59 PM
Which timezone setting (OS / Hadoop core / HDFS / Hive, etc.) is used as the default timezone for Hive? Which configuration / property should be changed to use a different timezone?
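For what it's worth, my understanding is that Hive picks up the default timezone of the JVM running the client / HiveServer2, which in turn defaults to the OS timezone. A quick way to check and override it (a sketch; the exact env file to edit depends on the install):
timedatectl | grep "Time zone"        # OS-level timezone the JVM inherits by default
# Force a specific JVM timezone for Hive clients (e.g. add to hive-env via Ambari):
export HADOOP_OPTS="$HADOOP_OPTS -Duser.timezone=UTC"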
01-10-2017
07:56 PM
1 Kudo
@Raj B I pondered over this when I started using NiFi and realized that this is a good feature; in fact, it is almost necessary to support a real-life data flow scenario. Let us take a simple scenario: say my group does risk evaluation in a bank and provides services to different LOBs within the bank (consider only the credit and debit transaction groups for this discussion). Let's assume that the format of the transactions received from these two groups is exactly the same; however, the way the data is received is different. While the Credit group places the data on a Windows share, the Debit group requires the data to be read from their server via FTP. Now the data ingestion process of the risk group, built on NiFi, will look something like this:
- A process group to read the data from the shared drive
- A process group to read the data using FTP
- Another process group to take the input from the above two process groups and apply further processing, like splitting records into individual transactions and ensuring all mandatory data elements are present (if not, route to error), and then do two things:
  - A process group with a flow to place the data on Kafka for the Storm topology to pick up and apply the model to evaluate risk
  - Another process group with a flow to store the data in HDFS for archival purposes, to support audit and compliance requirements
As you can see, you need to be able to support multiple input ports and output ports to build this flow. Why can't we just place the entire flow in one group? Technically you can, but wouldn't that be messy, hard to manage, drastically reduce reusability, and make the overall flow less flexible and scalable? Hope this helps!
08-17-2016
03:21 PM
1 Kudo
I am using PutSyslog and need to pass in the content of the flow file. Q1 - Is there a way to reference the content of the flow file directly within the Message Body property of the PutSyslog processor? Q2 - If not, how do I add the content of the flow file as an attribute so that I can pass the attribute to the Message Body?
07-20-2016
11:23 PM
Note: Also moved the question to the Governance track, since it's Atlas-related.
07-20-2016
11:22 PM
2 Kudos
@Mark Heydenrych Atlas requires the JDK to be present (in addition to the JRE), so make sure that you have javac available. The same issue in a new installation was resolved by installing the JDK. So if you are using OpenJDK, install OpenJDK-devel. Hope this helps!
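For example, on a CentOS/RHEL node with OpenJDK 1.8 this would look roughly like the following (package name varies with the JDK version in use):
yum install -y java-1.8.0-openjdk-devel    # pulls in javac alongside the JRE
which javac && javac -version              # confirm the compiler is now on the PATH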
06-17-2016
08:47 PM
1 Kudo
Based on my reading, a snapshot is taken in the NameNode and only some key information is captured in it; the actual blocks on the DataNodes are not copied. The snapshot files record the block list and file sizes, and no data copying is done. In the event we delete data from a particular directory in HDFS, how will we be able to recover the data? Is it that data is never actually deleted?
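For context, the recovery path relies on the read-only .snapshot directory HDFS exposes under a snapshottable directory; a minimal sketch (directory and snapshot names are made up):
hdfs dfsadmin -allowSnapshot /data/important                        # mark the directory snapshottable (admin command)
hdfs dfs -createSnapshot /data/important s1                         # record the current file list / block references
hdfs dfs -rm -r /data/important/2016-06                             # later, data is deleted from the live tree...
hdfs dfs -cp /data/important/.snapshot/s1/2016-06 /data/important/  # ...but can be copied back from the snapshot
# Blocks referenced by a snapshot are not released on the DataNodes until the snapshot itself
# is deleted, which is why the "deleted" data remains recoverable.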
05-25-2016
02:39 AM
3 Kudos
@Sri Bandaru In reference to Kerberos, it is better to create the Hadoop service accounts locally, to avoid sending Hadoop-internal auth requests to AD and adding to the AD load. Setting up the Hadoop accounts locally in a KDC and setting up a one-way trust between the KDC and AD is the way to go.
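For anyone implementing this, a rough sketch of the one-way trust pieces (realm names and the trust password are placeholders; the HDP security guide has the full procedure):
# On the local MIT KDC: create the cross-realm principal so AD users can reach services in the local realm
kadmin.local -q "addprinc -pw <TRUST_PASSWORD> krbtgt/HADOOP.EXAMPLE.COM@AD.EXAMPLE.COM"
# On the AD domain controller (Windows command prompt), register the MIT realm and the trust:
#   ksetup /addkdc HADOOP.EXAMPLE.COM kdc-host.example.com
#   netdom trust HADOOP.EXAMPLE.COM /Domain:ad.example.com /add /realm /passwordt:<TRUST_PASSWORD>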
05-18-2016
01:34 AM
+1 I am looking for the same information. Can someone also share the following pieces of information:
- Sample code using the Spark HBase Connector
- Link to the latest documentation
- Is this GA yet?
05-03-2016
01:38 PM
1 Kudo
The fix is to delete that link manually and recreate it correctly: rm /usr/hdp/current/zookeeper-client/conf
ln -s /etc/zookeeper/2.3.2.0-2950/0 /usr/hdp/current/zookeeper-client/conf
05-03-2016
01:27 PM
4 Kudos
@Felix Karabanov
I recently experienced something similar during an upgrade. I am not sure what the root cause is within the software, but here is what Ambari is expecting and the workaround that worked for me. Correct setup: the configuration directories are expected to be set up this way, starting with /etc/zookeeper/conf:
[root@sandbox zookeeper]# ll /etc/zookeeper/conf
lrwxrwxrwx 1 root root 38 2015-10-27 14:31 /etc/zookeeper/conf -> /usr/hdp/current/zookeeper-client/conf
[root@sandbox zookeeper]# ll /usr/hdp/current/zookeeper-client/conf
lrwxrwxrwx 1 root root 29 2015-10-27 14:31 /usr/hdp/current/zookeeper-client/conf -> /etc/zookeeper/2.3.2.0-2950/0
In your case, the link /usr/hdp/current/zookeeper-client/conf probably points back to /etc/zookeeper/conf, which causes the issue.
04-26-2016
07:59 PM
@Sunile Manjee The dependencies are derived from the entity definitions once you create those entities using Falcon (UI or CLI). So, for example, when you define your cluster in the cluster entity XML, you specify its name:
<cluster colo="location1" description="primaryDemoCluster" name="primaryCluster" xmlns="uri:falcon:cluster:0.1">
When you reference this cluster in a feed entity, the dependency gets created as soon as you create the feed entity:
<feed description="Demo Input Data" name="demoEventData" xmlns="uri:falcon:feed:0.1">
<tags>externalSystem=eventData,classification=clinicalResearch</tags>
<groups>events</groups>
<frequency>minutes(3)</frequency>
<timezone>GMT+00:00</timezone>
<late-arrival cut-off="hours(4)"/>
<clusters>
<cluster name="primaryCluster" type="source">
<validity start="2015-08-10T08:00Z" end="2016-02-08T22:00Z"/>
<retention limit="days(5)" action="delete"/>
</cluster>
</clusters>
The same concept applies to process-to-feed dependencies. Take a look at this example for a working set of Falcon entities: https://github.com/sainib/hadoop-data-pipeline/tree/master/falcon
04-14-2016
01:19 PM
@Kuldeep Kulkarni
It applies to both (MR and Sqoop), but the job in question is being kicked off using Sqoop. It is more critical for jobs that connect to external systems like SQL Server, since this whole environment is Kerberized and the connection fails without the user's Kerberos ticket (which will be available only if the job is running under that user's ID). Now, as per this documentation - https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/SecureContainer.html - "YARN containers in a secure cluster use the operating system facilities to offer execution isolation for containers. Secure containers execute under the credentials of the job user. The operating system enforces access restriction for the container. The container must run as the user that submitted the application."
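For reference, the behavior described there maps to the LinuxContainerExecutor; a minimal sketch of the settings involved and a quick way to verify (the property names are standard yarn-site.xml keys; on an Ambari-managed cluster these are normally set when Kerberos is enabled):
# yarn-site.xml properties involved (sketch):
#   yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
#   yarn.nodemanager.linux-container-executor.group = hadoop
# Quick check on a worker node while the Sqoop-launched MR job is running --
# the task containers should show the submitting user's id rather than 'yarn':
ps -eo user,args | grep [Y]arnChild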
04-14-2016
03:04 AM
1 Kudo
I am working in a Kerberized cluster where a user submits a Sqoop job on the edge node. The actual MR jobs on the worker nodes run under the user 'yarn'. Is there any way we can configure YARN to use the end user's ID for launching the MR processes on the worker nodes?
04-13-2016
05:30 PM
Try the same command after performing kinit:
kinit -kt <PATH_TO_KEYTAB> <YOUR-PRINCIPAL-ID>
04-05-2016
11:34 PM
1 Kudo
Connection pooling from a single app is pretty easy and straightforward, but what is the best way to manage a connection pool for a Storm topology? Since every bolt may potentially run on a separate node and JVM, creating a connection pool for each container could result in many open connections, which could be an issue. Is there a way to share a connection pool across a topology? Are there any best practices / lessons learned anyone would like to share?
03-24-2016
06:13 PM
@Mark White The Oozie logs may provide more details. This sounds like a curl issue. Can you provide more details on what you are doing in the workflow? If possible, attach the log files. Also, are you using Centrify in this environment?
03-24-2016
04:10 PM
1 Kudo
@Alex Raj
So it appears you are calling a shell action which is expected to produce some output (on the local file system or HDFS) and you want to see that output, is that correct? Or are you actually wanting to capture the output (echo statements) of the script for the purpose of referencing those values in subsequent steps of the Oozie workflow?
If it's the latter, see the response from @Benjamin Leonhardi.
If it's the former, which I believe you are asking, then the answer is (you won't be thrilled): it depends.
It depends on what the script is doing. I can imagine a few scenarios and will talk through them, but let us know if you are doing something different, in which case we can talk specifics. So here is what you MAY be doing in the script:
- Writing to a local file with an absolute path
- Writing to a local file with a relative path
- Writing to an HDFS file with an absolute path
Writing to a local file with an absolute path -
Let's say the script does this - touch /tmp/a.txt
In this case, the output gets created on the local filesystem of the NodeManager where the task got executed. There is really no way to tell which one, so you would have to check all nodes. The good thing is that you know what the absolute path is.
Writing to a local file with a relative path -
Let's say the script does this - touch ./a.txt
In this case, the output gets created on the local filesystem of the NodeManager where the task got executed, but relative to the working temp directory where workflow temporary files are created. There is really no way to tell which node, and we may never even see the actual file, because the temporary files are usually cleaned up after the workflow is executed. So if the file is within that subdirectory, it will most likely be deleted.
Writing to an HDFS file with an absolute path <- This is the best way to set up the program, because you know where to look for the output. Let's say the script does this -
echo "my content" >> /tmp/a.txt
hdfs dfs -put /tmp/a.txt /tmp/a.txt
In this case, the output gets created on HDFS and you know the path, so it is easy to find. If you are not following the last approach, I would recommend it. Hope this helps.
03-24-2016
03:51 PM
@Amit Tewari I see that you are using Chrome; can you check the network console in Dev Tools to see if you are missing any CSS files (404s)? Are you using any custom CSS settings in your browser, by any chance?
02-12-2016
12:10 AM
1 Kudo
Is it possible to have multiple NFS Gateways on different nodes on a single cluster?
02-10-2016
10:10 PM
1 Kudo
That checks out. I changed (search - replace all) the actual customer realm in the log before pasting here and accidentally left the .com in lowercase. I checked the original log file and the realm is correct.
02-10-2016
05:32 PM
1 Kudo
Running into the following error after I enter all the KDC and kadmin details and go to the next screen; the error comes up at the 'Testing the KDC client' step. Any inputs / pointers are appreciated.
10 Feb 2016 10:33:53,818 ERROR [Server Action Executor Worker 1663] CreatePrincipalsServerAction:199 - Failed to create principal, hadoop_dev-021016@EXAMPLE.com - can not check if principal exists: hadoop_dev-021016@EXAMPLE.com
org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: can not check if principal exists: hadoop_dev-021016@EXAMPLE.com
at org.apache.ambari.server.serveraction.kerberos.ADKerberosOperationHandler.principalExists(ADKerberosOperationHandler.java:223)
at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.processIdentity(CreatePrincipalsServerAction.java:155)
at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processRecord(KerberosServerAction.java:512)
at org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.processIdentities(KerberosServerAction.java:401)
at org.apache.ambari.server.serveraction.kerberos.CreatePrincipalsServerAction.execute(CreatePrincipalsServerAction.java:79)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:537)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:474)
at java.lang.Thread.run(Thread.java:744)
Caused by: javax.naming.LimitExceededException: Referral limit exceeded [Root exception is com.sun.jndi.ldap.LdapReferralException: [LDAP: error code 10 - 0000202B: RefErr: DSID-031007F3, data 0, 1 access points
ref 1: 'example.com'
^@]; remaining name '']; remaining name ''
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2938)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2840)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1849)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at com.sun.jndi.ldap.LdapReferralContext.search(LdapReferralContext.java:657)
at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1867)
at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1772)
at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:386)
at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:356)
at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:276)
at org.apache.ambari.server.serveraction.kerberos.ADKerberosOperationHandler.findPrincipalDN(ADKerberosOperationHandler.java:559)
at org.apache.ambari.server.serveraction.kerberos.ADKerberosOperationHandler.principalExists(ADKerberosOperationHandler.java:221)
... 7 more
Caused by: com.sun.jndi.ldap.LdapReferralException: [LDAP: error code 10 - 0000202B: RefErr: DSID-031007F3, data 0, 1 access points
ref 1: 'example.com'
^@]; remaining name ''
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2927)
... 65 more
10 Feb 2016 10:33:53,822 INFO [Server Action Executor Worker 1663] KerberosServerAction:444 - Processing identities completed.
10 Feb 2016 10:33:54,335 WARN [ambari-action-scheduler] ActionScheduler:317 - Operation completely failed, aborting request id: 199
10 Feb 2016 10:33:54,335 INFO [ambari-action-scheduler] ActionScheduler:699 - Service name is , component name is AMBARI_SERVER_ACTIONskipping sending ServiceComponentHostOpFailedEvent for AMBARI_SERVER_ACTION
10 Feb 2016 10:33:54,339 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role AMBARI_SERVER_ACTION requestId null taskId 1664 stageId null
10 Feb 2016 10:33:54,339 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn05.example.com role KERBEROS_CLIENT requestId null taskId 1665 stageId null
10 Feb 2016 10:33:54,340 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn06.example.com role KERBEROS_CLIENT requestId null taskId 1666 stageId null
10 Feb 2016 10:33:54,340 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn07.example.com role KERBEROS_CLIENT requestId null taskId 1667 stageId null
10 Feb 2016 10:33:54,340 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn08.example.com role KERBEROS_CLIENT requestId null taskId 1668 stageId null
10 Feb 2016 10:33:54,340 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role KERBEROS_CLIENT requestId null taskId 1669 stageId null
10 Feb 2016 10:33:54,340 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodesn02.example.com role KERBEROS_CLIENT requestId null taskId 1670 stageId null
10 Feb 2016 10:33:54,341 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role AMBARI_SERVER_ACTION requestId null taskId 1671 stageId null
10 Feb 2016 10:33:54,341 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn05.example.com role KERBEROS_SERVICE_CHECK requestId null taskId 1672 stageId null
10 Feb 2016 10:33:54,341 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role AMBARI_SERVER_ACTION requestId null taskId 1673 stageId null
10 Feb 2016 10:33:54,341 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role AMBARI_SERVER_ACTION requestId null taskId 1674 stageId null
10 Feb 2016 10:33:54,342 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn05.example.com role KERBEROS_CLIENT requestId null taskId 1675 stageId null
10 Feb 2016 10:33:54,342 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn06.example.com role KERBEROS_CLIENT requestId null taskId 1676 stageId null
10 Feb 2016 10:33:54,342 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn07.example.com role KERBEROS_CLIENT requestId null taskId 1677 stageId null
10 Feb 2016 10:33:54,342 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodedn08.example.com role KERBEROS_CLIENT requestId null taskId 1678 stageId null
10 Feb 2016 10:33:54,342 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role KERBEROS_CLIENT requestId null taskId 1679 stageId null
10 Feb 2016 10:33:54,343 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodesn02.example.com role KERBEROS_CLIENT requestId null taskId 1680 stageId null
10 Feb 2016 10:33:54,343 INFO [ambari-action-scheduler] ActionDBAccessorImpl:186 - Aborting command. Hostname hdpnodemn02.example.com role AMBARI_SERVER_ACTION requestId null taskId 1681 stageId null
10 Feb 2016 10:34:10,173 INFO [qtp-client-182831] PersistKeyValueService:82 - Looking for keyName hostPopup-pagination-displayLength-admin
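To narrow this down outside Ambari, one option is to reproduce the same principal lookup with ldapsearch from the Ambari server host and see whether the directory answers with referrals; the URL, bind DN, and search base below are placeholders for whatever was entered in the Kerberos wizard:
# Reproduce the lookup Ambari performs during "Test KDC client"; referral entries in the result
# usually mean the search base / LDAP URL points at a level that refers out to example.com
ldapsearch -LLL -H ldaps://ad.example.com:636 \
  -D "binduser@example.com" -W \
  -b "ou=hadoop,dc=example,dc=com" \
  "(userPrincipalName=hadoop_dev-021016@EXAMPLE.com)"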
02-02-2016
11:50 PM
1 Kudo
@khushi kalra Like Neeraj and Artem pointed out, Apache Atlas is the right tool for managing metadata for Hadoop. Falcon is more for data pipeline and data workflow management, which is a big part of overall data governance but not metadata. In addition to the links and resources provided, here is an Apache Atlas presentation video by the Governance product manager, Andrew Ahn: https://www.youtube.com/watch?v=LZR4qhKJeSI
02-02-2016
11:45 PM
3 Kudos
You are correct, James. I've checked on this a little bit more. You do need to use the --map-column-hive option to map all the columns to the right data types:
--map-column-hive <map>    Override default mapping from SQL type to Hive type for configured columns.
Alternatively, you can create the table in Hive before running the sqoop job and just run the data import.
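For illustration, a sqoop invocation with that option would look roughly like this (the connection string, table, and column names are placeholders):
# Import into Hive, overriding the type mapping for two columns
sqoop import \
  --connect "jdbc:sqlserver://dbhost:1433;databaseName=sales" \
  --username etl_user -P \
  --table ORDERS \
  --hive-import --hive-table orders \
  --map-column-hive ORDER_ID=INT,ORDER_TS=TIMESTAMP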
01-22-2016
08:36 PM
@Balu Back to the original error.