Member since
05-10-2016
303
Posts
35
Kudos Received
0
Solutions
06-29-2016
12:48 PM
JA009: org.apache.hive.hcatalog.common.HCatException : 9001 : Exception
occurred while processing HCat request : MetaException while getting
delegation token.. Cause : MetaException(message:Unauthorized connection
for super-user: oozie/master003.next.rec.mapreduce.m1.p.fti.net@FTI.NET
from IP 10.98.138.87) @Rahul Pathak and @peeyush : I have this error on target. it's missing some proxyuser ?
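The "Unauthorized connection for super-user: oozie/..." message usually means the cluster serving the metastore does not allow the oozie principal to impersonate other users. A sketch of the core-site.xml properties typically involved on the target cluster (the wildcard values are placeholders; restrict them to your Oozie hosts and user groups as needed):

```xml
<!-- core-site.xml on the target cluster (sketch; values are placeholders) -->
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <!-- hosts oozie may impersonate from; 10.98.138.87 is the IP rejected above -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
```

After changing these, the services reading core-site.xml (HDFS, Hive metastore) need a restart for the new proxyuser rules to take effect.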
... View more
06-29-2016
12:07 PM
@Rahul Pathak and @peeyush thanks!! But why does Falcon enforce this element order, and why isn't it spelled out in the Falcon documentation?
... View more
06-29-2016
10:42 AM
Hi all, I'm following this tutorial: https://community.hortonworks.com/articles/38285/falcon-hive-integration.html and also checked https://falcon.apache.org/HiveIntegration.html. I don't know what value is expected in this feed: [falcon@master003 HIVE]$ more replication-feed.xml
<?xml version="1.0" encoding="UTF-8"?>
<feed description="Monthly Analytics Summary" name="replication-feed" xmlns="uri:falcon:feed:0.1">
  <tags>EntityType=Feed</tags>
  <frequency>months(1)</frequency>
  <clusters>
    <cluster name="c-source-current" type="source">
      <validity start="2016-06-20T00:00Z" end="2016-06-30T00:00Z"/>
      <retention limit="months(36)" action="delete"/>
      <table uri="catalog:falcon_landing_db:summary_table#ds=${YEAR}-${MONTH}"/>
    </cluster>
    <cluster name="c-target-next" type="target">
      <validity start="2016-06-20T00:00Z" end="2016-06-30T00:00Z"/>
      <retention limit="months(180)" action="delete"/>
      <table uri="catalog:falcon_archive_db:falcon_summary_archive_table#ds=${YEAR}-${MONTH}"/>
    </cluster>
  </clusters>
  <table uri="catalog:falcon_landing_db:summary_table#ds=${YEAR}-${MONTH}"/>
  <schema location="hcat" provider="hcat"/>
  <ACL owner="falcon" group="hadoop" permission="0755"/>
</feed>
[falcon@master003 HIVE]$ falcon entity -type feed -submit -file replication-feed.xml
log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.authentication.client.KerberosAuthenticator).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ERROR: Bad Request;javax.xml.bind.UnmarshalException
- with linked exception:
[org.xml.sax.SAXParseException; lineNumber: 23; columnNumber: 46; cvc-complex-type.2.4.a: Invalid content was found starting with element 'schema'. One of '{"uri:falcon:feed:0.1":notification, "uri:falcon:feed:0.1":ACL}' is expected.]
I also tried with this parameter, but got the same error: <schema location="" provider="hcatalog"/>
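Judging from the parser message ("One of 'notification, ACL' is expected" where 'schema' was found), the feed XSD expects ACL (optionally preceded by notification) right after the table element, with schema coming last. Swapping the two elements should get past this validation error; a sketch of the tail of the feed, same values as above:

```xml
  <table uri="catalog:falcon_landing_db:summary_table#ds=${YEAR}-${MONTH}"/>
  <ACL owner="falcon" group="hadoop" permission="0755"/>
  <schema location="hcat" provider="hcat"/>
</feed>
```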
... View more
Labels:
- Apache Falcon
- Apache Hive
06-28-2016
02:52 PM
dfs.data.dirs is not defined in my cluster, so where is non-HDFS data stored?
... View more
06-28-2016
02:38 PM
@slachterman: I've read this formula and I know this property, but I want to know what exactly occupies this space.
... View more
06-28-2016
02:26 PM
Hi all, what is the "Non DFS Used" value: is it some directories or files? hdfs dfsadmin -report # for one datanode
Rack: /SMTS/BC21
Decommission Status : Normal
Configured Capacity: 35974718423040 (32.72 TB)
DFS Used: 9748915679350 (8.87 TB)
Non DFS Used: 12141363006 (11.31 GB)
DFS Remaining: 26213661380684 (23.84 TB)
DFS Used%: 27.10%
DFS Remaining%: 72.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 16
df -k
/dev/sdb1 2928678656 798914704 2129763952 28% /var/opt/hosting/data/disk1
/dev/sdc1 2928678656 785083044 2143595612 27% /var/opt/hosting/data/disk2
/dev/sdd1 2928678656 798313760 2130364896 28% /var/opt/hosting/data/disk3
/dev/sde1 2928678656 799300600 2129378056 28% /var/opt/hosting/data/disk4
/dev/sdf1 2928678656 786169864 2142508792 27% /var/opt/hosting/data/disk5
/dev/sdg1 2928678656 799986864 2128691792 28% /var/opt/hosting/data/disk6
/dev/sdh1 2928678656 804181044 2124497612 28% /var/opt/hosting/data/disk7
/dev/sdi1 2928678656 789982192 2138696464 27% /var/opt/hosting/data/disk8
/dev/sdj1 2928678656 795951544 2132727112 28% /var/opt/hosting/data/disk9
/dev/sdk1 2928678656 789924668 2138753988 27% /var/opt/hosting/data/disk10
/dev/sdl1 2928678656 801960132 2126718524 28% /var/opt/hosting/data/disk11
/dev/sdm1 2928678656 782037100 2146641556 27% /var/opt/hosting/data/disk12
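For what it's worth, the three DFS figures in the report above reconcile exactly: Non DFS Used is Configured Capacity minus DFS Used minus DFS Remaining, i.e. whatever sits on the data-dir disks that is neither HDFS block data nor free space (OS files, logs, other applications sharing the volumes). A quick check with the numbers copied from this datanode's report:

```shell
# Figures (bytes) copied from the dfsadmin report above
configured=35974718423040   # Configured Capacity (32.72 TB)
dfs_used=9748915679350      # DFS Used (8.87 TB)
remaining=26213661380684    # DFS Remaining (23.84 TB)

# Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
echo $(( configured - dfs_used - remaining ))
# prints 12141363006, i.e. the reported "Non DFS Used" (11.31 GB)
```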
... View more
Labels:
- Apache Hadoop
06-03-2016
09:44 AM
Like distcp, I have the same issue with Falcon when submitting a cluster entity. My Falcon server needs to know the local clusterID and the remote clusterID, and the falcon command has no option to point to a config file such as hdfs-site.xml.
... View more
05-30-2016
02:18 PM
Great Thanks.
... View more
05-30-2016
12:46 PM
Hi, if I understand correctly, we can start multiple NFS gateway servers on multiple hosts (datanode, namenode, HDFS client). Say we have (servernfs01, servernfs02, servernfs03) and (client01, client02): client01# mount -t nfs servernfs01:/ /test01
client02# mount -t nfs servernfs02:/ /test02 My question is: how to avoid a service interruption? What happens if servernfs01 fails? How can client01 keep access to HDFS in that case?
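As far as I know, the stock HDFS NFS gateway has no built-in failover: a client stays pinned to the gateway it mounted, so if servernfs01 dies, client01 must remount against a surviving gateway (or you put the gateways behind a floating IP / load balancer so the remount is transparent). For reference, the mount options the HDFS NFS gateway documentation recommends look like this (sketch; host and mount point taken from the example above):

```shell
# NFSv3 mount against one HDFS NFS gateway; the gateway only supports v3,
# and soft state like file locking is not available, hence nolock.
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync servernfs01:/ /test01
```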
... View more