Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 11421 | 03-08-2019 06:33 PM |
| | 4864 | 02-15-2019 08:47 PM |
| | 4148 | 09-26-2018 06:02 PM |
| | 10542 | 09-07-2018 10:33 PM |
| | 5588 | 04-25-2018 01:55 AM |
06-29-2017
06:58 PM
@Matt Clarke - Perfect! Thanks.
06-29-2017
06:50 PM
@Matt Clarke @Wynner
06-29-2017
06:49 PM
I was reading https://nifi.apache.org/docs.html (Configuring State Providers). Can we specify more than one directory for WriteAheadLocalStateProvider? By default, state-management.xml has the following property:

<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">/var/lib/nifi/state/local</property>
</local-provider>

Can I add another disk partition as a value of Directory in the XML above? If yes, how do I do that? FYI, the following does not work:

<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <property name="Directory">/var/lib/nifi/state/local</property>
    <property name="Directory">/blahblah</property>
</local-provider>
Labels:
- Apache NiFi
06-24-2017
12:29 AM
@Matt Clarke - Very nicely explained! 🙂
06-19-2017
07:37 PM
1 Kudo
These steps have been successfully tried and tested on Ambari-2.4.2.0 and HDP-2.5.3.

When you install and configure Solr Cloud via Ambari, or embedded Solr via Ambari Infra, on a kerberized cluster, SPNEGO authentication gets enabled by default. There is no direct switch to disable only SPNEGO authentication. Please follow the method below to disable it.

Step 1 - Log in to the Ambari server.

Step 2 - Take a backup of the following script:

/var/lib/ambari-server/resources/common-services/SOLR/<version>/package/scripts/setup_solr_kerberos_auth.py

Step 3 - Edit the script above and make the following modification.

Original value:
command += '\'{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"}}\''

Recommended value:
command += '\'{ }\''

Step 4 - Replace the cached script using the command below, then restart ambari-agent followed by the Solr service (via Ambari):

cp /var/lib/ambari-server/resources/mpacks/solr-ambari-mpack-5.5.2.2.5/common-services/SOLR/5.5.2.2.5/package/scripts/setup_solr_kerberos_auth.py /var/lib/ambari-agent/cache/common-services/SOLR/5.5.2.2.5/package/scripts/setup_solr_kerberos_auth.py

You should now be able to access the Solr Web UI without having a Kerberos ticket.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
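The Step 3 hand edit can also be scripted with sed. Below is a hypothetical sketch: a scratch file stands in for setup_solr_kerberos_auth.py so the commands are safe to try anywhere; on a real Ambari server you would point SCRIPT at the paths from Steps 2 and 4.

```shell
# Hypothetical sketch: apply the Step 3 edit with sed instead of by hand.
# A temp file stands in for setup_solr_kerberos_auth.py for illustration.
SCRIPT=$(mktemp)
cat > "$SCRIPT" <<'EOF'
command += '\'{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"}}\''
EOF
cp "$SCRIPT" "$SCRIPT.bak"   # always keep a backup first (Step 2)
# Swap the KerberosPlugin security JSON for an empty document:
sed -i 's|{"authentication":{"class": "org.apache.solr.security.KerberosPlugin"}}|{ }|' "$SCRIPT"
cat "$SCRIPT"   # the line now passes '{ }', leaving SPNEGO unconfigured
```

The backup copy keeps the original KerberosPlugin line, so the edit is easy to revert if you re-enable SPNEGO later.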
06-02-2017
12:01 AM
3 Kudos
According to the default Oozie log4j configuration in Ambari, the log gets rotated every hour and retention is set to 30 days:

log4j.appender.oozie=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.oozie.RollingPolicy=org.apache.oozie.util.OozieRollingPolicy
log4j.appender.oozie.File=${oozie.log.dir}/oozie.log
log4j.appender.oozie.Append=true
log4j.appender.oozie.layout=org.apache.log4j.PatternLayout
log4j.appender.oozie.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - SERVER[${oozie.instance.id}] %m%n
# The FileNamePattern must end with "-%d{yyyy-MM-dd-HH}.gz" or "-%d{yyyy-MM-dd-HH}" and also start with the
# value of log4j.appender.oozie.File
log4j.appender.oozie.RollingPolicy.FileNamePattern=${log4j.appender.oozie.File}-%d{yyyy-MM-dd-HH}
# The MaxHistory controls how many log files will be retained (720 hours / 24 hours per day = 30 days); -1 to disable
log4j.appender.oozie.RollingPolicy.MaxHistory={{oozie_log_maxhistory}}

If you want to configure DRFA (DailyRollingFileAppender) to roll the log file daily, set the parameters below in the log4j section of the Oozie configuration via Ambari and restart the required services:

log4j.appender.oozie=org.apache.log4j.DailyRollingFileAppender
log4j.appender.oozie.File=${oozie.log.dir}/oozie.log
log4j.appender.oozie.Append=true
log4j.appender.oozie.layout=org.apache.log4j.PatternLayout
log4j.appender.oozie.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - SERVER[${oozie.instance.id}] %m%n
log4j.appender.oozie.DatePattern='.'yyyy-MM-dd

Please note that DRFA does not support MaxBackupIndex, so if you want retention you can go with RFA (RollingFileAppender) size-based rolling and use MaxBackupIndex.

Please comment if you have any feedback/questions/suggestions. Happy Hadooping!!
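As a sketch of that last suggestion, RFA size-based rolling with retention would look roughly like the following. The size and backup-count values are illustrative assumptions, not recommendations from the article; adjust them to your cluster.

```properties
# Hypothetical sketch: size-based rolling with RollingFileAppender, which
# (unlike DRFA) honors MaxBackupIndex for retention.
log4j.appender.oozie=org.apache.log4j.RollingFileAppender
log4j.appender.oozie.File=${oozie.log.dir}/oozie.log
log4j.appender.oozie.Append=true
log4j.appender.oozie.MaxFileSize=256MB
log4j.appender.oozie.MaxBackupIndex=20
log4j.appender.oozie.layout=org.apache.log4j.PatternLayout
log4j.appender.oozie.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - SERVER[${oozie.instance.id}] %m%n
```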
05-19-2017
01:15 AM
1 Kudo
Short Description: How to run a sample Oozie sqoop action to get data from a MySQL table onto HDFS.

Article

Below are the steps to run a sample sqoop action to get data from a MySQL table onto HDFS. Note - Please refer this to create a sample MySQL table with dummy data.

1. Configure job.properties. Example:

nameNode=hdfs://<namenode-host>:8020
jobTracker=<rm-host>:8050
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}
oozie.libpath=/user/root

2. Configure workflow.xml. Example:

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<workflow-app xmlns="uri:oozie:workflow:0.2" name="sqoop-wf">
<start to="sqoop-node"/>
<action name="sqoop-node">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<command>import --connect jdbc:mysql://<mysql-server-hostname>:3306/<database-name> --username <mysql-database-username> --table <table-name> --driver com.mysql.jdbc.Driver --m 1</command>
</sqoop>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Sqoop failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

3. Upload workflow.xml to the "oozie.wf.application.path" defined in job.properties.

4. Use the command below to run the Oozie workflow:

oozie job -oozie http://<oozie-server-hostname>:11000/oozie -config /$PATH/job.properties -run

Please comment if you have any questions! Happy Hadooping!! 🙂
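The upload-and-run steps can be sketched as shell commands. This is a hypothetical sketch: the hostnames and the job ID are placeholders, and the HDFS destination assumes oozie.wf.application.path is the submitting user's home directory as in the job.properties above.

```shell
# Upload workflow.xml to oozie.wf.application.path (here: the user's home dir)
hdfs dfs -put -f workflow.xml /user/$(whoami)/

# Submit and start the workflow; on success this prints a workflow job ID
oozie job -oozie http://<oozie-server-hostname>:11000/oozie \
    -config job.properties -run

# Check progress with the returned job ID (status should reach SUCCEEDED)
oozie job -oozie http://<oozie-server-hostname>:11000/oozie -info <job-id>
```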
05-15-2017
04:27 AM
1 Kudo
Below are the steps for Oozie database migration from Derby to PostgreSQL.

Step 1 - Have a PostgreSQL server installed and ready to be configured.

Step 2 - Stop the Oozie service from the Ambari UI.

Step 3 - Install the PostgreSQL JDBC connector:

yum install postgresql-jdbc

Step 4 - On the Ambari server, run the command below. Note - Pass the appropriate driver path if /usr/share/java/postgresql-jdbc.jar does not exist.

ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar

Step 5 - Log in to the PostgreSQL DB as the postgres user, create a blank 'oozie' database, and grant the required permissions to the 'oozie' user:

[root@ambaview ~]# su - postgres
-bash-4.1$ psql
psql (8.4.20)
Type "help" for help.
postgres=# CREATE DATABASE oozie;
CREATE DATABASE
postgres=# CREATE USER oozie WITH PASSWORD 'oozie';
CREATE ROLE
postgres=# GRANT ALL PRIVILEGES ON DATABASE oozie TO oozie;
GRANT

Step 6 - Add the Oozie server IP address and 'oozie' user information to the pg_hba configuration file, then restart the postgresql service. For example, add the following line to /var/lib/pgsql/data/pg_hba.conf:

host oozie oozie 17X.2X.X9.2X0/0 md5

[root@ambaview ~]# service postgresql restart
Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]

Step 7 - Add the PostgreSQL database server details in the Oozie configuration via the Ambari UI.

Step 8 - Copy postgresql-jdbc.jar to Oozie's libext directory:

cp /usr/share/java/postgresql-jdbc.jar /usr/hdp/<hdp-version>/oozie/libext/

Step 9 - Prepare the Oozie war file (run this command on the Oozie server as the oozie user):

/usr/hdp/<version>/oozie/bin/oozie-setup.sh prepare-war

Step 10 - Prepare the Oozie schema using the command below (run it on the Oozie host as the oozie user):

/usr/hdp/<version>/oozie/bin/oozie-setup.sh db create -run

Step 11 - Start the Oozie server via Ambari.

Happy Hadooping!! Please post your feedback or questions in the comment section.
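After Step 11, a quick sanity check is useful. This is a hypothetical sketch with placeholder hostnames; the Oozie tables and the "System mode: NORMAL" status are what you would expect to see on a healthy migration, not guaranteed output.

```shell
# Confirm Oozie created its schema in the new PostgreSQL database
# (\dt should list Oozie tables such as WF_JOBS and WF_ACTIONS)
psql -h <postgres-host> -U oozie -d oozie -c '\dt'

# Confirm the Oozie server is healthy against the migrated database
# (expected: System mode: NORMAL)
oozie admin -oozie http://<oozie-server-hostname>:11000/oozie -status
```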
04-29-2017
03:04 AM
2 Kudos
@Arpan Rajani Please send an email to certification@hortonworks.com
04-29-2017
02:59 AM
2 Kudos
@Riccardo Iacomini In addition to the answer above, please refer to these useful articles on creating custom Ambari alerts:
https://community.hortonworks.com/articles/38149/how-to-create-and-register-custom-ambari-alerts.html
https://community.hortonworks.com/questions/9033/how-to-add-custom-alerts-in-ambari.html