Member since: 01-07-2016
Posts: 89
Kudos Received: 20
Solutions: 6

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 809 | 02-05-2016 02:17 PM |
| | 250 | 02-05-2016 12:56 AM |
| | 157 | 01-29-2016 03:24 AM |
| | 86 | 01-20-2016 03:52 PM |
| | 74 | 01-20-2016 08:48 AM |
10-11-2016
01:41 PM
Hi, I'm trying to export a blueprint from my HDP 2.5 cluster (after an upgrade from 2.4 to 2.5), but I'm getting the response below. Any idea? Thanks!

http://10.0.1.1:8080/api/v1/clusters/cluster1?format=blueprint returns:
{
"status" : 400,
"message" : "Invalid Request: Specified configuration type is not associated with any service: hst-common-conf"
}
08-22-2016
09:18 AM
hi, not really, it's the same issue....

[08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Current user : adf_admin is not allowed to grant role. User has to belong to ADMIN role and have it as current role, for this action. Otherwise, grantor need to have ADMIN OPTION on role being granted and have it as a current role for this action.
08-19-2016
03:36 PM
I tried GRANT admin TO USER adf_admin; and got the error below:

[08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Current user : adf_admin is not allowed to grant role. User has to belong to ADMIN role and have it as current role, for this action. Otherwise, grantor need to have ADMIN OPTION on role being granted and have it as a current role for this action.
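For what it's worth, the error text itself hints at one thing to try: the ADMIN role is not "current" by default, even for users listed in hive.users.in.admin.role, so it has to be activated in the session first. A sketch (assuming adf_admin is already in hive.users.in.admin.role and HiveServer2 was restarted after that change):

```sql
-- activate the ADMIN role for this session first
SET ROLE ADMIN;
-- then role grants should be permitted
GRANT admin TO USER adf_admin;
```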
08-19-2016
03:34 PM
hello @Brandon Wilson, I'm wondering how I can do this? I've been googling and I can't find anything about granting admin privileges to the adf_admin user (or any other user). I thought I could do this by putting the user name into the configuration variable hive.users.in.admin.role.
Please let me know. Thank you
08-19-2016
10:37 AM
hi, I'm trying to set up SQL Standard-Based Authorization following https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_dataintegration/content/hive-013-feature-sql-standard-based-grant-revoke.html but apparently it DOESN'T work. These are the recommended values:

-hiveconf hive.metastore.uris=' ' (a space inside single quotation marks)
-hiveconf hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.MetaStoreAuthzAPIAuthorizerEmbedOnly

My Ambari Hive setup has other values:

hive.metastore.uris = thrift://blabla.com:9083
hive.security.authorization.manager = org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory

Right now, when I try show roles; as the user defined in hive.users.in.admin.role, I get the error below:

[08S01]: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Current user : adf_admin is not allowed to list roles. User has to belong to ADMIN role and have it as current role, for this action.

Thank you
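For comparison, the HiveServer2-side settings that the Hive documentation describes for SQL standard-based authorization would look roughly like this in hive-site.xml (a sketch: the property names are from the docs, and the user name is just this thread's example):

```xml
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.security.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory</value>
</property>
<property>
  <name>hive.security.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator</value>
</property>
<property>
  <name>hive.users.in.admin.role</name>
  <value>adf_admin</value>
</property>
```

Note the factory differs from the SQLStdConfOnlyAuthorizerFactory quoted above: the ConfOnly variant only validates configuration changes and does not enable role management.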
07-28-2016
07:20 AM
So I found the appropriate components, but it doesn't convert the file properly. Any idea? The input file is binary.
07-27-2016
04:11 PM
Hi, where can I find the character set values that are accepted by the ConvertCharacterSet processor? Also, which component can I use to load a CSV file and to write the results into the converted CSV file?
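One way to see which names are likely to be accepted (an assumption on my part: NiFi runs on the JVM, and character-set names are ultimately resolved via java.nio.charset.Charset) is to list what the local JVM supports:

```java
import java.nio.charset.Charset;

public class ListCharsets {
    public static void main(String[] args) {
        // Print every charset name this JVM can resolve; canonical names
        // such as UTF-8, US-ASCII, and ISO-8859-1 are guaranteed by the Java spec.
        for (String name : Charset.availableCharsets().keySet()) {
            System.out.println(name);
        }
    }
}
```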
07-26-2016
07:23 AM
hdp-select
accumulo-client - None
accumulo-gc - None
accumulo-master - None
accumulo-monitor - None
accumulo-tablet - None
accumulo-tracer - None
atlas-server - None
falcon-client - 2.4.2.0-258
falcon-server - 2.4.2.0-258
flume-server - None
hadoop-client - 2.4.2.0-258
hadoop-hdfs-datanode - 2.4.2.0-258
hadoop-hdfs-journalnode - 2.4.2.0-258
hadoop-hdfs-namenode - 2.4.2.0-258
hadoop-hdfs-nfs3 - 2.4.2.0-258
hadoop-hdfs-portmap - 2.4.2.0-258
hadoop-hdfs-secondarynamenode - 2.4.2.0-258
hadoop-httpfs - None
hadoop-mapreduce-historyserver - 2.4.2.0-258
hadoop-yarn-nodemanager - 2.4.2.0-258
hadoop-yarn-resourcemanager - 2.4.2.0-258
hadoop-yarn-timelineserver - 2.4.2.0-258
hbase-client - 2.4.2.0-258
hbase-master - 2.4.2.0-258
hbase-regionserver - 2.4.2.0-258
hive-metastore - 2.4.2.0-258
hive-server2 - 2.4.2.0-258
hive-webhcat - 2.4.2.0-258
kafka-broker - None
knox-server - None
livy-server - None
mahout-client - None
oozie-client - 2.4.2.0-258
oozie-server - 2.4.2.0-258
phoenix-client - None
phoenix-server - None
ranger-admin - None
ranger-kms - None
ranger-usersync - None
slider-client - None
spark-client - 2.4.2.0-258
spark-historyserver - 2.4.2.0-258
spark-thriftserver - 2.4.2.0-258
sqoop-client - None
sqoop-server - None
storm-client - None
storm-nimbus - None
storm-slider-client - None
storm-supervisor - None
zeppelin-server - None
zookeeper-client - 2.4.2.0-258
zookeeper-server - 2.4.2.0-258
07-25-2016
02:19 PM
Please find the results:

root@ip-1:/# ambari-server set-current --cluster-name=testcluster6 --version-display-name=HDP-2.4.2.0-258
Using python /usr/bin/python
Setting current version...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
ERROR: Exiting with exit code 1.
REASON: Error during setting current version. Http status code - 500.
{
"status" : 500,
"message" : "org.apache.ambari.server.controller.spi.SystemException: Finalization failed. More details: \nSTDOUT: Begin finalizing the upgrade of cluster testcluster6 to version 2.4.2.0-258\n\nSTDERR: The cluster stack version state CURRENT is not allowed to transition directly into CURRENT"
}
root@ip-1:/#
07-25-2016
10:10 AM
Dear all, I've been running a default HDP 2.4 install for a few weeks; everything worked fine and I was able to restart services via Ambari. Last week I tried to restart a service (after changing configuration parameters) and got the error listed below:

500 status code received on POST method for API: /api/v1/clusters/spice6/requests
Error message: Server Error

Please find attached the full log from the ambari-server: ambari-error-log.txt. Thank you
02-09-2016
01:37 PM
It's strange that you can't reproduce the error. Does it work for you?
Application application_1454923438220_0007 failed 2
times due to AM Container for appattempt_1454923438220_0007_000002
exited with exitCode: 1
For more detailed output, check the application tracking page:
http://sandbox.hortonworks.com:8088/cluster/app/application_1454923438220_0007
Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e10_1454923438220_0007_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
02-05-2016
04:01 PM
1 Kudo
well, CSVExcelStorage doesn't work either....
2016-02-05 16:01:28,917 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2016-02-05 16:01:29,745 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias sourceData
Details at logfile: /home/hdfs/pig_1454687855333.log
grunt>

I'm confused... what is it?
02-05-2016
02:45 PM
1 Kudo
I'm 100% sure there is no problem with the input dataset; I kept only the first 5 records in the file and it's the same issue.
02-05-2016
02:18 PM
you should fix this FORUM website, it's a pain to format text, paste code, etc....
02-05-2016
02:17 PM
2 Kudos
I created sample code, it works FINE.

BufferedInputStream inStream = null;
String inputF = "hdfs://CustomerData-20160128-1501807.avro";
org.apache.hadoop.fs.Path inPath = new org.apache.hadoop.fs.Path(inputF);
try {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
    FileSystem fs = FileSystem.get(URI.create(inputF), conf);
    inStream = new BufferedInputStream(fs.open(inPath));
} catch (IOException e) {
    e.printStackTrace();
}
// the Avro container header carries the schema, so no records need to be read
DataFileStream<GenericRecord> reader = new DataFileStream<>(inStream, new GenericDatumReader<GenericRecord>());
Schema schema = reader.getSchema();
System.out.println(schema.toString());
reader.close();
02-05-2016
02:06 PM
I'm trying to write sample Java code... but per https://hadoop.apache.org/docs/r2.6.1/api/org/apache/hadoop/conf/Configuration.html :

[root@sandbox deploy-4]# find / -name core-default.xml
[root@sandbox deploy-4]# find / -name core-site.xml

there are no such files in the sandbox. How can I get through this step? thanks
02-05-2016
01:08 PM
Can you call avro-tools-1.7.4.jar from within a Pig script? And is it also possible to access files stored on HDFS using avro-tools?
02-05-2016
11:36 AM
1 Kudo
Hi, I want to read the metadata from an Avro file stored in HDFS using the Avro API ( https://avro.apache.org/docs/1.4.1/api/java/org/apache/avro/file/DataFileReader.html ). The Avro DataFileReader accepts only File objects. Is it somehow possible to read data from a file stored on HDFS instead of from the local fs? Thank you
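For reference, a minimal sketch of one way around this (an assumption on my part that the avro-mapred artifact is on the classpath; the NameNode address and file path below are just this thread's examples): DataFileReader also has a constructor taking a SeekableInput, and org.apache.avro.mapred.FsInput implements SeekableInput over an HDFS path:

```java
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.FsInput;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class AvroHdfsSchema {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
        // FsInput wraps an HDFS file as a SeekableInput, which DataFileReader accepts
        FsInput input = new FsInput(new Path("/src/CustomerData-20160128-1501807.avro"), conf);
        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(input, new GenericDatumReader<GenericRecord>())) {
            Schema schema = reader.getSchema();
            System.out.println(schema.toString(true));
        }
    }
}
```

This only reads the container header, so it returns the schema without scanning the records; running it of course requires a reachable cluster.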
02-05-2016
09:03 AM
1 Kudo
this is odd: when I do grunt> b = limit sourceData 5; grunt> dump b; it works for me. But when I don't limit the result set and just execute dump sourceData; I hit the same error.
02-05-2016
08:43 AM
1 Kudo
Then what kind of issue with the environment could it be? I only executed the mentioned command, nothing else.
02-05-2016
12:59 AM
you can find the dataset here: https://drive.google.com/file/d/0B6RZ_9vVuTEcTHllU1dIR2VBY1E/view?usp=sharing Thank you
02-05-2016
12:56 AM
1 Kudo
fyi https://issues.apache.org/jira/browse/PIG-4793 : org.apache.pig.piggybank.storage.avro.AvroStorage is deprecated; use AvroStorage('schema', '-d'). This works.
02-05-2016
12:39 AM
1 Kudo
needless to say, this is insane. Yes, grunt with -x mapreduce; I tried -x tez but:

2016-02-05 00:37:42,172 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias sourceData
Details at logfile: /home/hdfs/pig_1454632554431.log

privileges are correct:
drwxr-xr-x - hdfs hdfs 0 2016-02-04 23:55 /src
and the delimiter is ';'. Any idea?
02-05-2016
12:13 AM
1 Kudo
Hi, I am trying to execute a Pig script in mapreduce mode. The script is simple:

grunt> sourceData = load 'hdfs://sandbox.hortonworks.com:8020/src/CustomerData.csv' using PigStorage(';') as (nullname: chararray,customerId: chararray,VIN: chararray,Birthdate: chararray,Mileage: chararray,Fuel_Consumption: chararray);

The file is stored in HDFS:

hadoop fs -ls hdfs://sandbox.hortonworks.com:8020/src/CustomerData.csv
-rw-r--r-- 3 hdfs hdfs 6828 2016-02-04 23:55 hdfs://sandbox.hortonworks.com:8020/src/CustomerData.csv

Error that I got:

Failed Jobs:
JobId Alias Feature Message Outputs
job_1454609613558_0003 sourceData MAP_ONLY Message: Job failed! hdfs://sandbox.hortonworks.com:8020/tmp/temp-710368608/tmp-1611282262,

Input(s):
Failed to read data from "hdfs://sandbox.hortonworks.com:8020/src/CustomerData.csv"

Output(s):
Failed to produce result in "hdfs://sandbox.hortonworks.com:8020/tmp/temp-710368608/tmp-1611282262"

Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias sourceData
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias sourceData
at org.apache.pig.PigServer.openIterator(PigServer.java:935)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:565)
at org.apache.pig.Main.main(Main.java:177)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:927)
... 13 more
- Tags:
- Data Processing
- Pig
02-03-2016
01:16 PM
But if you have a source file which is 100GB and the machine's memory is only 32GB, would it work? In case mapreduce is used and multiple nodes process the file... is there any benefit from parallelism? The script is simple:

x = load 'file.txt' using PigStorage(',');
x = rank x;
store x using ....
02-03-2016
01:01 PM
But is Hadoop / MapReduce able to split the source .txt file so that each node processes only a specific part of it?
02-03-2016
08:56 AM
1 Kudo
I use a simple Pig script that reads an input .txt file and adds a new field (a row number) to each line. The output relation is then stored into Avro. Is there any benefit to running such a script in mapreduce mode compared to local mode? Thank you
02-01-2016
12:27 PM
ah, I already did... my question was why it's there... when I use local mode it's not there... anyway, there is no reply from anyone behind AvroStorage... that's pretty odd.