Member since: 09-18-2015
Posts: 3274
Kudos Received: 1158
Solutions: 425
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1513 | 11-01-2016 05:43 PM |
|  | 3503 | 11-01-2016 05:36 PM |
|  | 2957 | 07-01-2016 03:20 PM |
|  | 4941 | 05-25-2016 11:36 AM |
|  | 2151 | 05-24-2016 05:27 PM |
11-01-2016
05:43 PM
1 Kudo
@Sagar Shimpi Is it FIPS 140-2 compliant? I believe it is not; AFAIK Ranger KMS is based on Hadoop KMS: https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
11-01-2016
05:36 PM
@Adi Jabkowsky Take a look at the error logs:
ERROR [HiveServer2-Background-Pool: Thread-165]: authorizer.RangerHiveAuthorizer (RangerHiveAuthorizer.java:filterListCmdObjects(430)) - filterListCmdObjects: Internal error: null RangerAccessResult object received back from isAccessAllowed()!
11-01-2016
05:31 PM
Demo
Extract data from images and store it in HDFS. Documents smaller than 10 MB are stored in HBase.
Documents larger than 10 MB land in HDFS, with their metadata stored in HBase (a routing sketch follows the Part 1 link below).
Part 1 - https://www.linkedin.com/pulse/cds-content-data-store-nosql-part-1-co-dev-neeraj-sabharwal
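A minimal sketch of that size-based routing, assuming the HDFS client and the HBase shell are on the path; the table name (docs), column names, and HDFS path are placeholders, not part of the original demo:

#!/bin/bash
# Route a document by size: < 10 MB goes into HBase, >= 10 MB lands in HDFS
# with only a metadata/pointer row recorded in HBase.
FILE="$1"
SIZE=$(stat -c%s "$FILE" 2>/dev/null || stat -f%z "$FILE")   # Linux or macOS stat
LIMIT=$((10 * 1024 * 1024))

if [ "$SIZE" -lt "$LIMIT" ]; then
  # Small document: store the content itself in HBase (base64 keeps it shell-safe)
  echo "put 'docs', '$(basename "$FILE")', 'content:data', '$(base64 "$FILE" | tr -d '\n')'" | hbase shell
else
  # Large document: store the file in HDFS and only its metadata in HBase
  hdfs dfs -put -f "$FILE" /data/docs/
  echo "put 'docs', '$(basename "$FILE")', 'meta:hdfs_path', '/data/docs/$(basename "$FILE")'" | hbase shell
fi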
- Tags:
- Data Processing
- HBase
- How-ToTutorial
- image-extract
11-01-2016
05:30 PM
3 Kudos
@bigdata.neophyte Try this:
[hdfs@ip-172-31-40-160 ~]$ cat b.sh
beeline << EOF
!connect jdbc:hive2://localhost:10000 hive hive
show tables
EOF
[hdfs@ip-172-31-40-160 ~]$ sh -x b.sh
+ beeline
Beeline version 1.2.1000.2.5.0.0-1245 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000 hive hive
Connecting to jdbc:hive2://localhost:10000
Connected to: Apache Hive (version 1.2.1000.2.5.0.0-1245)
Driver: Hive JDBC (version 1.2.1000.2.5.0.0-1245)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show tables;
+-------------------+--+
|     tab_name      |
+-------------------+--+
| edl_good_data     |
| td_edl_good_data  |
| test              |
+-------------------+--+
3 rows selected (0.152 seconds)
0: jdbc:hive2://localhost:10000>
Closing: 0: jdbc:hive2://localhost:10000
[hdfs@ip-172-31-40-160 ~]$
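If you only need one statement, a shorter non-interactive form avoids the heredoc entirely. A sketch, assuming the same HiveServer2 URL and hive/hive credentials as above; queries.hql is a placeholder file:

# -u supplies the JDBC URL, -n/-p the user and password, -e the statement to run
beeline -u jdbc:hive2://localhost:10000 -n hive -p hive -e "show tables;"

# or keep the statements in a file and pass it with -f
beeline -u jdbc:hive2://localhost:10000 -n hive -p hive -f queries.hql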
09-06-2016
02:05 AM
@Kirk Haslbeck and @David Kaiser I have accepted this as the best answer.
07-16-2016
12:05 AM
2 Kudos
OpenHAB - Build your smart home in no time!
Welcome to http://www.openhab.org/
A vendor and technology agnostic open source automation software for your home.
OpenHAB is a mature, open source home automation platform that runs on a variety of hardware and is protocol agnostic, meaning it can connect to nearly any home automation hardware on the market today. If you’ve been frustrated with the number of manufacturer specific apps you need to run just to control your lights, then I’ve got great news for you: OpenHAB is the solution you’ve been looking for – it’s the most flexible smart home hub you’ll ever find. Source
Demo:
Go to http://www.openhab.org/getting-started/downloads.html
Download Runtime core and Demo files
Extract the runtime core files into a directory called openHAB and extract the demo files into the same openHAB directory, as sketched below.
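A command-line sketch of the download-and-extract step; the zip file names are placeholders for whatever versions the downloads page currently offers:

# Extract the runtime core and the demo files into the same openHAB directory.
mkdir openHAB && cd openHAB
unzip ~/Downloads/openhab-runtime-<version>.zip                # runtime core
unzip ~/Downloads/openhab-demo-configuration-<version>.zip     # demo items, sitemaps, rules

# openHAB 1.x ships start scripts in the runtime zip; the UI then listens on 8080/8443.
./start.sh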
Now, install the openHAB app on your smartphone. I am using iOS; once you launch the app, disable the DEMO tab and enter your machine's local address, https://192.x.x.x:8443, as shown below.
You will be controlling the room settings from your phone while openHAB runs on your machine or Raspberry Pi.
For now, just for fun, I am running this on my Mac and playing with it from my iOS device.
Docs and Examples
If you want to test it like a "pro" then follow this example
- Tags:
- Data Ingestion & Streaming
- FAQ
- IoT
07-16-2016
12:03 AM
4 Kudos
INTRODUCING THE HORTONWORKS CONNECTED DATA CLOUD TECHNICAL PREVIEW
To this end, we are introducing the "Hortonworks Connected Data Cloud" Technical Preview. This Technical Preview gives you a way to quickly spin up Apache Hive and Apache Spark clusters that are ready to run ephemeral workloads in your Amazon Web Services (AWS) environment. Source
Step 1
Follow the video at http://hortonworks.github.io/hdp-aws/launch/
Step 2
Create and Terminate a cluster.
Screenshots: Ambari services starting, install completed, GUI access to the different views.
- Tags:
- aws
- Cloud & Operations
- hdp-aws
- How-ToTutorial
07-07-2016
10:13 PM
@nallen I have pasted the platform-info.sh output in my previous comment.
xav-us-lap1732:scripts root# ./platform-info.sh
Metron 0.2.0BETA
--
* master
--
commit d3257a79d0a853ed511c1231ab79a2bfcc48603f
Author: nickwallen <nick@nickallen.org>
Date: Mon Jun 27 15:34:20 2016 -0700
METRON-259 Using 'any' for Snort's HOME_NETWORK (nickwallen) closes apache/incubator-metron#176
--
metron-deployment/roles/metron_elasticsearch_templates/tasks/load_templates.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--
ansible 2.0.0.2
config file =
configured module search path = Default w/o overrides
--
07-07-2016
03:37 PM
xav-us-lap1732:scripts root# ./platform-info.sh
Metron 0.2.0BETA
--
* master
--
commit d3257a79d0a853ed511c1231ab79a2bfcc48603f
Author: nickwallen <nick@nickallen.org>
Date: Mon Jun 27 15:34:20 2016 -0700
METRON-259 Using 'any' for Snort's HOME_NETWORK (nickwallen) closes apache/incubator-metron#176
--
metron-deployment/roles/metron_elasticsearch_templates/tasks/load_templates.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--
ansible 2.0.0.2
config file =
configured module search path = Default w/o overrides
--
Vagrant 1.8.4
--
Python 2.7.10
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_92, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_92.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.5", arch: "x86_64", family: "mac"
--
Darwin xav-us-lap1732.local 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64
xav-us-lap1732:scripts root#
07-07-2016
01:44 PM
@Neha Sinha Yes:
1467899208 13:46:48 metron green 1 1 0 0 0 0 0 0 - 100.0%
07-07-2016
11:58 AM
1 Kudo
ansible 2.0.0.2
TASK [elasticsearch : Configure Elasticsearch] *********************************
changed: [node1] => (item={u'regexp': u'^# *cluster\\.name:', u'line': u'cluster.name: metron'})
changed: [node1] => (item={u'regexp': u'^# *network\\.host:', u'line': u'network.host: _eth1:ipv4_'})
changed: [node1] => (item={u'regexp': u'^# *discovery\\.zen\\.ping\\.unicast\\.hosts:', u'line': u'discovery.zen.ping.unicast.hosts: [ node1 ]'})
changed: [node1] => (item={u'regexp': u'^# *path\\.data', u'line': u'path.data: /data1/elasticsearch,/data2/elasticsearch'})
TASK [elasticsearch : Create Logrotate Script for Elasticsearch] ***************
ok: [node1]
TASK [metron_elasticsearch_templates : include] ********************************
included: /private/var/root/incubator-metron/metron-deployment/roles/metron_elasticsearch_templates/tasks/load_templates.yml for node1
TASK [metron_elasticsearch_templates : Start Elasticsearch] ********************
ok: [node1]
TASK [metron_elasticsearch_templates : Wait for Elasticsearch Host to Start] ***
ok: [node1]
TASK [metron_elasticsearch_templates : Wait for Index to Become Available] *****
fatal: [node1]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'result.content.find(\"green\") != -1 or result.content.find(\"yellow\") != -1' failed. The error was: ERROR! error while evaluating conditional (result.content.find(\"green\") != -1 or result.content.find(\"yellow\") != -1): ERROR! 'dict object' has no attribute 'content'"}
PLAY RECAP *********************************************************************
node1 : ok=66 changed=9 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
xav-us-lap1732:full-dev-platform root#
- Tags:
- CyberSecurity
- Metron
- Labels:
- Apache Metron
07-07-2016
11:01 AM
@Neha Sinha I executed this to fix the above error:
pip install --upgrade setuptools --user python
The run now fails with:
TASK [ambari_config : check if ambari-server is up on node1:8080] **************
fatal: [node1]: FAILED! => {"changed": false, "elapsed": 300, "failed": true, "msg": "Timeout when waiting for node1:8080"}
PLAY RECAP *********************************************************************
node1 : ok=11 changed=5 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
07-07-2016
10:52 AM
@Neha Sinha I did follow that. Getting this:
==> node1: Updating /etc/hosts file on host machine (password may be required)...
==> node1: Running provisioner: ansible...
node1: Running ansible-playbook...
Unexpected Exception: (setuptools 1.1.6 (/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python), Requirement.parse('setuptools>=11.3'))
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-playbook", line 72, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
File "/Library/Python/2.7/site-packages/ansible/cli/playbook.py", line 30, in <module>
from ansible.executor.playbook_executor import PlaybookExecutor
File "/Library/Python/2.7/site-packages/ansible/executor/playbook_executor.py", line 30, in <module>
from ansible.executor.task_queue_manager import TaskQueueManager
File "/Library/Python/2.7/site-packages/ansible/executor/task_queue_manager.py", line 29, in <module>
from ansible.executor.play_iterator import PlayIterator
File "/Library/Python/2.7/site-packages/ansible/executor/play_iterator.py", line 29, in <module>
from ansible.playbook.block import Block
File "/Library/Python/2.7/site-pa
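If the macOS system setuptools (1.1.6 here) cannot be upgraded cleanly, one way around the conflict, a sketch and not part of the Metron docs, is to run Ansible from a virtualenv with its own newer setuptools:

# Create an isolated Python environment so the old system setuptools is not used
sudo pip install virtualenv
virtualenv ~/metron-venv
source ~/metron-venv/bin/activate

# Install the Ansible version the playbooks expect, plus a setuptools that satisfies >=11.3
pip install 'setuptools>=11.3' 'ansible==2.0.0.2'
ansible --version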
07-06-2016
08:27 PM
1 Kudo
TASK [ambari_config : Install python-requests] *********************************
ok: [node1]
TASK [ambari_config : check if ambari-server is up on node1:8080] **************
ok: [node1]
TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] ***********
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "msg": "value of wait_for_complete must be one of: yes,on,1,true,1,True,no,off,0,false,0,False, got: True"}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @../../playbooks/metron_full_install.retry
PLAY RECAP *********************************************************************
node1 : ok=36 changed=5 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be visible above. Please fix these errors and try again.
I am following https://github.com/apache/incubator-metron/tree/master/metron-deployment/vagrant/full-dev-platform and hitting the above while provisioning.
- Tags:
- CyberSecurity
- Metron
- Labels:
- Apache Metron
07-05-2016
05:04 PM
1 Kudo
@pradeep arumalla See the solution mentioned in this link. You may have to play around with the different numbers to find the optimal setting.
07-05-2016
03:00 PM
Chronos is a replacement for cron.
A fault-tolerant job scheduler for Mesos which handles dependencies and ISO 8601-based schedules.
Marathon is a framework for Mesos that is designed to launch long-running applications and, in Mesosphere, serves as a replacement for a traditional init system.
In Mesosphere, Chronos complements Marathon: it provides another way to run applications, according to a schedule or other conditions, such as the completion of another job. It is also capable of scheduling jobs on multiple Mesos slave nodes, and provides statistics about job failures and successes. Source
Install https://mesos.github.io/chronos/docs/ and gist
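Once Chronos is up, a quick way to see it schedule something is to register a job through its REST API. A sketch, assuming Chronos listens on localhost:4400 (its usual default); the job name, command, owner, and schedule are placeholders:

# Register a job that runs every hour starting from the given ISO 8601 instant
curl -L -X POST http://localhost:4400/scheduler/iso8601 \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "echo-demo",
        "command": "echo hello from chronos",
        "schedule": "R/2016-07-05T20:00:00Z/PT1H",
        "owner": "you@example.com",
        "epsilon": "PT30M"
      }'

# List the jobs Chronos currently knows about
curl -L http://localhost:4400/scheduler/jobs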
Demo
Part 1 - https://www.linkedin.com/pulse/data-center-operating-system-dcos-part-1-neeraj-sabharwal
Part 2 - https://www.linkedin.com/pulse/apache-marathon-part-2-neeraj-sabharwal
07-05-2016
11:30 AM
1 Kudo
You need Mesos to run this - Post 1
What is Apache Marathon?
Marathon is a production-grade container orchestration platform for Mesosphere's Datacenter Operating System (DC/OS) and Apache Mesos.
I am launching multiple applications using Marathon, while Mesos provides the resources to run those applications.
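For reference, this is roughly what launching one of those long-running applications looks like against Marathon's REST API. A sketch, assuming Marathon on localhost:8080 (its default port); the app id and command are placeholders:

# Describe a trivial long-running app
cat > sleep-app.json <<'EOF'
{
  "id": "/sleep-demo",
  "cmd": "while true; do echo running; sleep 10; done",
  "cpus": 0.1,
  "mem": 32,
  "instances": 1
}
EOF

# Submit it to Marathon, which asks Mesos for the resources and keeps it running
curl -X POST -H 'Content-Type: application/json' \
  -d @sleep-app.json http://localhost:8080/v2/apps

# Check its status (scale it later with a PUT on the same endpoint)
curl http://localhost:8080/v2/apps/sleep-demo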
Demo
More reading: https://mesosphere.github.io/marathon/ (Gist & application example)
07-05-2016
04:03 AM
1 Kudo
DC/OS - a new kind of operating system that spans all of the servers in a physical or cloud-based datacenter, and runs on top of any Linux distribution.
Source
Projects
More details https://docs.mesosphere.com/overview/components/
Let's cover Mesos in this post.
Frameworks (applications running on Mesos): http://mesos.apache.org/documentation/latest/frameworks/
I used http://mesos.apache.org/gettingstarted/ to install Mesos on my local machine. I am launching the C++, Java, and Python example frameworks in this demo, as sketched below.
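A sketch of the getting-started flow from a source build of Mesos; paths are relative to the build directory, and on older releases the agent script is mesos-slave.sh rather than mesos-agent.sh:

# Start a local master and one agent
./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos &
./bin/mesos-agent.sh --master=127.0.0.1:5050 --work_dir=/var/lib/mesos &

# Run the bundled example frameworks (C++, Java, Python) against the local master
./src/test-framework --master=127.0.0.1:5050
./src/examples/java/test-framework 127.0.0.1:5050
./src/examples/python/test-framework 127.0.0.1:5050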
Mesos demo
More reading
07-04-2016
01:51 PM
4 Kudos
Hive: Apache Hive is a data warehouse infrastructure built on top of Hadoop for providing data summarization, query, and analysis.
HBase: Apache HBase™ is the Hadoop database, a distributed, scalable, big data store
Hawq: http://hawq.incubator.apache.org/
PXF: PXF is an extensible framework that allows HAWQ to query external system data
Let's learn Query federation
This topic describes how to access Hive data using PXF.
Link
Previously, in order to query Hive tables using HAWQ and PXF, you needed to create an external table in PXF that described the target table's Hive metadata. Since HAWQ is now integrated with HCatalog, HAWQ can use metadata stored in HCatalog instead of external tables created for PXF. HCatalog is built on top of the Hive metastore and incorporates Hive's DDL. This provides several advantages:
- You do not need to know the table schema of your Hive tables
- You do not need to manually enter information about Hive table location or format
- If Hive table metadata changes, HCatalog provides updated metadata. This is in contrast to the use of static external PXF tables to define Hive table metadata for HAWQ.
HAWQ retrieves table metadata from HCatalog using PXF. HAWQ creates in-memory catalog tables from the retrieved metadata. If a table is referenced multiple times in a transaction, HAWQ uses its in-memory metadata to reduce external calls to HCatalog. PXF queries Hive using table metadata that is stored in the HAWQ in-memory catalog tables. Table metadata is dropped at the end of the transaction.
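A sketch of what that looks like from psql on the HAWQ master; the database name and Hive table name are placeholders:

# Query a Hive table through the hcatalog integration; no external table definition needed
psql -d postgres <<'SQL'
SELECT * FROM hcatalog.default.my_hive_table LIMIT 10;
SQL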
Demo
Tools used: Hive, HAWQ, Zeppelin
HBase tables: follow this to create the HBase tables:
perl create_hbase_tables.pl
Create a table in HAWQ to access the HBase table (a sketch follows below).
Note: the port is 51200, not 50070.
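A sketch of the HAWQ external table over the HBase table; the host, table name, and column-family/qualifier names are placeholders, and 51200 is the PXF service port from the note above:

psql -d postgres <<'SQL'
-- recordkey maps to the HBase row key; other columns map to "columnfamily:qualifier"
CREATE EXTERNAL TABLE hbase_sales (recordkey TEXT, "cf1:saleid" INT, "cf1:comments" TEXT)
LOCATION ('pxf://namenode_host:51200/sales?PROFILE=HBase')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');

SELECT * FROM hbase_sales LIMIT 10;
SQL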
Links:
- Gist
- PXF docs
- Must see this
- Zeppelin interpreter settings
- Tags:
- Data Processing
- hawq
- Hive
- How-ToTutorial
- pivotal
- pxf
07-04-2016
01:45 PM
2 Kudos
I found this article: "For mobile analytics, Yahoo is in the process of replacing HBase with Druid."
History: 24th Oct, 2012
To test out the setup, I have deployed Druid in two clusters: the first deployment is in my multi-node cluster, and the second deployment uses this repo.
Details are on this blog.
Demo - PS: it's a 10-minute demo.
We are loading pageviews and then executing queries (a sample query sketch is shown below). See the links at the bottom to download the git repo and code.
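A sketch of one of those queries, sent to the Druid broker with curl; the broker port (8082), the datasource name, and the dimension/metric names are assumptions, adjust them for your setup:

# Top 5 URLs by page views over 2016, as a Druid topN query
cat > pageviews-top.json <<'EOF'
{
  "queryType": "topN",
  "dataSource": "pageviews",
  "dimension": "url",
  "metric": "views",
  "threshold": 5,
  "granularity": "all",
  "aggregations": [ { "type": "longSum", "name": "views", "fieldName": "views" } ],
  "intervals": [ "2016-01-01/2017-01-01" ]
}
EOF
curl -X POST -H 'Content-Type: application/json' \
  -d @pageviews-top.json 'http://localhost:8082/druid/v2/?pretty'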
Gif: Download
I use this to control gif movement.
Links:
- Page view queries and data
- Spin up the environment on your Mac or Windows ("not sure about Windows"): Git link. This will spin up Druid, ZK, Hadoop, Postgres.
- Gist
Happy Hadooping!!!!
- Tags:
- Data Ingestion & Streaming
- druid
- FAQ
07-03-2016
12:23 AM
6 Kudos
"Druid is fast column-oriented distributed data store". Druid is an open source data store designed for OLAP queries on event data. Architecture
Historical nodes are the workhorses that handle storage and querying on "historical" data (non-realtime). Historical nodes download segments from deep storage, respond to the queries from broker nodes about these segments, and return results to the broker nodes. They announce themselves and the segments they are serving in Zookeeper, and also use Zookeeper to monitor for signals to load or drop new segments. Coordinator nodes monitor the grouping of historical nodes to ensure that data is available, replicated and in a generally "optimal" configuration. They do this by reading segment metadata information from metadata storage to determine what segments should be loaded in the cluster, using Zookeeper to determine what Historical nodes exist, and creating Zookeeper entries to tell Historical nodes to load and drop new segments. Broker nodes receive queries from external clients and forward those queries toRealtime and Historical nodes. When Broker nodes receive results, they merge these results and return them to the caller. For knowing topology, Broker nodes use Zookeeper to determine what Realtime and Historical nodes exist. Indexing Service nodes form a cluster of workers to load batch and real-time data into the system as well as allow for alterations to the data stored in the system. Realtime nodes also load real-time data into the system. They are simpler to set up than the indexing service, at the cost of several limitations for production use. Segments are stored in deep storage. You can use S3, HDFS or local mount. Queries are going from client to broker to Realtime or Historical nodes. LAMBDA Architecture Dependencies Indexing service - Source ZK, Storage and Metadata
A running ZooKeeper cluster for cluster service discovery and maintenance of current data topology A metadata storage instance for maintenance of metadata about the data segments that should be served by the system A "deep storage" LOB store/file system to hold the stored segments Source Part 2 - Demo Druid and HDFS as deep storage.
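A sketch of the common runtime properties that wire up those three dependencies, with HDFS as deep storage; hosts, paths, and the metadata password are placeholders, and the druid-hdfs-storage extension must also be on the extensions load list:

cat >> conf/druid/_common/common.runtime.properties <<'EOF'
# ZooKeeper: service discovery and current data topology
druid.zk.service.host=zk-host:2181

# Metadata storage: which segments should be served (MySQL shown; Postgres/Derby also work)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db-host:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=changeme

# Deep storage: where the segments themselves live
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments
EOF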
- Tags:
- Data Processing
- druid
- How-ToTutorial
07-01-2016
03:24 PM
1 Kudo
@roy p See this if it helps. Link
07-01-2016
03:20 PM
1 Kudo
@ed day You can manage this by maintaining /etc/hosts, but whenever an IP changes you have to update the entries in the hosts file. FQDNs are recommended because, for example, an IP change in the environment then does not require any changes in the cluster, and users do not need a local /etc/hosts entry in their environment to reach the cluster.
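For illustration, the /etc/hosts approach looks like this (host names and IPs are examples only); every node and client needs the same entries, and each copy must be edited whenever an IP changes, which is exactly what DNS-resolved FQDNs avoid:

# Example entries only; repeat on every node and client machine
cat >> /etc/hosts <<'EOF'
192.168.1.11  master1.example.com  master1
192.168.1.12  worker1.example.com  worker1
EOF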
05-25-2016
11:36 AM
I was able to fix the above issue by adding the Hadoop jars to the classpath while starting the components.
Start Coordinator and Overlord on ns03:
java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/*:/usr/hdp/2.4.2.0-258/hadoop/client/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/* io.druid.cli.Main server coordinator &
java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/*:/usr/hdp/2.4.2.0-258/hadoop/client/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/* io.druid.cli.Main server overlord &
Start Historical and MiddleManager on ns02:
java `cat conf/druid/historical/jvm.config | xargs` -cp conf/druid/_common:conf/druid/historical:lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/*:/usr/hdp/2.4.2.0-258/hadoop/client/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/* io.druid.cli.Main server historical &
java `cat conf/druid/middleManager/jvm.config | xargs` -cp conf/druid/_common:conf/druid/middleManager:lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/*:/usr/hdp/2.4.2.0-258/hadoop/client/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/* io.druid.cli.Main server middleManager &
Start the Druid Broker:
java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/*:/usr/hdp/2.4.2.0-258/hadoop/client/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/* io.druid.cli.Main server broker &
05-24-2016
05:27 PM
1 Kudo
@karthik sai Make this your landing point: http://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.2.0.1/index.html
Release notes: http://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.2.0.1/bk_HDF_RelNotes/content/ch_hdf_relnotes.html#release_summary
Supported Operating Systems:
- Red Hat Enterprise Linux / CentOS 6 (64-bit)
- Red Hat Enterprise Linux / CentOS 7 (64-bit)
- Ubuntu Precise (12.04) (64-bit)
- Ubuntu Trusty (14.04) (64-bit)
- Debian 6
- Debian 7
- SUSE Enterprise Linux 11 - SP3 (64-bit)
2) http://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.2.0.1/bk_HDF_InstallSetup/content/hdf_supported_hdp.html
3) HDF 1.2 hardware recommendations: https://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.2/bk_HDF_InstallSetup/content/hdf_isg_hardware.html
Demo idea: https://community.hortonworks.com/articles/961/a-collection-of-nifi-examples.html
05-24-2016
04:13 PM
1 Kudo
@Alex Raj Hive tuning: http://hortonworks.com/blog/5-ways-make-hive-queries-run-faster/
SparkSQL: http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/
05-24-2016
04:07 PM
1 Kudo
HDP 2.4.2, Ambari 2.2.2, druid-0.9.0
I am following this http://druid.io/docs/latest/tutorials/quickstart.html and running:
[root@nss03 druid-0.9.0]# curl -X 'POST' -H 'Content-Type:application/json' -d @quickstart/wikiticker-index.json http://overlordnode:8090/druid/indexer/v1/task
{"task":"index_hadoop_wikiticker_2016-05-24T11:38:51.681Z"}
[root@nss03 druid-0.9.0]#
I can see that the job is submitted to the YARN queue. RM UI error details:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/10/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/130/log4j-slf4j-impl-2.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /hadoop/yarn/log/application_1464036814491_0009/container_e04_1464036814491_0009_01_000001 (Is a directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
at org.apache.hadoop.yarn.ContainerLogAppender.activateOptions(ContainerLogAppender.java:55)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:64)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:285)
at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:657)
at org.apache.hadoop.service.AbstractService.<clinit>(AbstractService.java:43)
May 24, 2016 4:39:11 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
May 24, 2016 4:39:11 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 24, 2016 4:39:11 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
- Labels:
- Apache Hadoop
- Apache YARN