Member since: 09-17-2015
Posts: 103
Kudos Received: 61
Solutions: 18
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2340 | 06-15-2017 11:58 AM
 | 2183 | 06-15-2017 09:18 AM
 | 2922 | 06-09-2017 10:45 AM
 | 1433 | 06-07-2017 03:52 PM
 | 3137 | 01-06-2017 09:41 PM
11-26-2021
04:28 AM
@nikkie_thomas You can set the following if you are using Tez:
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.smallfiles.avgsize=<some value>;
set hive.merge.size.per.task=<some value>;
set hive.merge.tezfiles=true;
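For illustration only (not from the original reply), here is a minimal sketch of applying those settings per session from Python with the PyHive library before the write that produces the small files; the host, username, table names, and the size values are assumptions you would adapt to your cluster:

# Minimal sketch: apply the merge settings on a HiveServer2 session via PyHive.
# Host, port, username, and table names below are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hiveserver2.example.com", port=10000, username="etl")
cursor = conn.cursor()
for setting in [
    "SET hive.merge.tezfiles=true",
    "SET hive.merge.smallfiles.avgsize=134217728",   # ~128 MB; pick a value for your cluster
    "SET hive.merge.size.per.task=268435456",        # ~256 MB; pick a value for your cluster
]:
    cursor.execute(setting)
# The settings apply to writes done in this same session
cursor.execute("INSERT OVERWRITE TABLE target_table SELECT * FROM staging_table")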
08-11-2021
08:47 AM
Some people use the Boto3 library to browse their Amazon S3 buckets from Python, and I was looking for the same for Azure. This is far from optimized, but it could be a starting point.
First things first: we need to find the Azure access key. Note that these keys are supposed to rotate, so keep that in mind.
In the Azure portal, go to the Storage Account in the Resource Group defined for your account, and click on Access keys.
There are two keys (for rotation without interruption); let's copy the first one.
In my CML project, I'm defining an AZURE_STORAGE_TOKEN environment variable with that key:
As you can see above, the STORAGE variable has been populated. If you want it to be populated automatically, here's some code:
!pip3 install git+https://github.com/fletchjeff/cmlbootstrap#egg=cmlbootstrap
import os
from cmlbootstrap import CMLBootstrap

# Instantiate the API wrapper
cml = CMLBootstrap()

# Set the STORAGE environment variable if it isn't already defined
try:
    storage = os.environ["STORAGE"]
except KeyError:
    storage = cml.get_cloud_storage()
    storage_environment_params = {"STORAGE": storage}
    storage_environment = cml.create_environment_variable(storage_environment_params)
    os.environ["STORAGE"] = storage
Now for the project! Install the required library:
pip3 install azure-storage-file-datalake
Here is the code listing the files under the "datalake" path. It doesn't handle all exceptions and so on; it's really a starting point only and not meant to be used in a production environment.
!pip3 install azure-storage-file-datalake
import os, uuid, sys, re
from azure.storage.filedatalake import DataLakeServiceClient
from azure.core._match_conditions import MatchConditions
from azure.storage.filedatalake._models import ContentSettings

def initialize_storage_account(storage_account_name, storage_account_key):
    # Build a client for the storage account's ADLS Gen2 (dfs) endpoint
    try:
        global service_client
        service_client = DataLakeServiceClient(
            account_url="{}://{}.dfs.core.windows.net".format("https", storage_account_name),
            credential=storage_account_key)
    except Exception as e:
        print(e)

def list_directory_contents(path):
    # Print every entry found under the given path in the container
    try:
        file_system_client = service_client.get_file_system_client(container)
        paths = file_system_client.get_paths(path)
        for path in paths:
            print(path.name)
    except Exception as e:
        print(e)
storage = os.environ['STORAGE']
storage_account_key = os.environ['AZURE_STORAGE_TOKEN']
# Extract the container and storage account names from the abfs:// STORAGE URL
m = re.search(r'abfs://(.+?)@(.+?)\.dfs\.core\.windows\.net', storage)
if m:
    container = m.group(1)
    storage_name = m.group(2)
    initialize_storage_account(storage_name, storage_account_key)
    list_directory_contents("datalake")
Happy browsing!
09-22-2020
02:00 AM
2 Kudos
In Cloudera Machine Learning (or CDSW for the on-premises version), projects are backed by git. You might want to use GitHub for your projects, so here is a simple way to do that.
First things first: there are basically two ways of interacting with git/GitHub, HTTPS or SSH. We'll use the latter to make authentication easy. You might also consider SSO or 2FA to enhance security; here we'll focus on the basics.
To make this authentication happen under the hood, copy your SSH key from CML to GitHub.
Find your SSH key in the Settings of CML:
Copy that key and add it in GitHub, under SSH and GPG keys in your github.com settings: Add SSH key.
Put cdsw in the Title and paste your SSH key content in the Key field:
Let's start with creating a new project on github.com:
The important thing here is the access mode we want to use: SSH
In CML, start a new project with a template:
Open a Terminal window in a new session:
Convert the project to a git project: cdsw@qp7h1qllrh9dx1hd:~$ git init
Initialized empty Git repository in /home/cdsw/.git/
Add all files to git: cdsw@qp7h1qllrh9dx1hd:~$ git add .
Commit the project: cdsw@qp7h1qllrh9dx1hd:~$ git commit -m "initial commit"
[master (root-commit) 5d75525] initial commit
47 files changed, 14086 insertions(+)
create mode 100755 .gitignore
create mode 100644 LICENSE.txt
create mode 100755 als.py
[...]
Add a remote origin with the URL of the remote repository where your local repository will be pushed: cdsw@qp7h1qllrh9dx1hd:~$ git remote add origin git@github.com:laurentedel/MyProject.git
Make the current branch the master branch: cdsw@qp7h1qllrh9dx1hd:~$ git branch -M master
Finally, push the changes (all files from the first commit) to master on github.com: cdsw@qp7h1qllrh9dx1hd:~$ git push -u origin master
The authenticity of host 'github.com (140.82.113.4)' can't be established.
RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,140.82.113.4' (RSA) to the list of known hosts.
Counting objects: 56, done.
Delta compression using up to 16 threads.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (56/56), 319.86 KiB | 857.00 KiB/s, done.
Total 56 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), done.
To github.com:laurentedel/MyProject.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
There you go!
Now we can use the usual git commands. Modify a file: cdsw@qp7h1qllrh9dx1hd:~$ echo "# MyProject" >> README.md
What's our status? cdsw@qp7h1qllrh9dx1hd:~$ git status
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
README.md
nothing added to commit but untracked files present (use "git add" to track)
Commit/push: cdsw@qp7h1qllrh9dx1hd:~$ git add README.md
cdsw@qp7h1qllrh9dx1hd:~$ git commit -m "adding a README"
[master 7008e88] adding a README
1 file changed, 1 insertion(+)
create mode 100644 README.md
cdsw@qp7h1qllrh9dx1hd:~$ git push -u origin master
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Counting objects: 3, done.
Delta compression using up to 16 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 290 bytes | 18.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To github.com:laurentedel/MyProject.git
5d75525..7008e88 master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
Happy commits!
12-03-2018
04:28 PM
This article was written against HDP 2.5.3; you may need to adjust some parameters for your actual version. We'll set the Kafka log level through the kafka.log4jController MBean with jConsole. The first step is to enable JMX access: add the following to the kafka-env template in the Kafka configs: export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false"
export JMX_PORT="9999"
To avoid JMX port conflicts like the one described in https://community.hortonworks.com/articles/73750/kafka-jmx-tool-is-failing-with-port-already-in-use.html, let's modify /usr/hdp/current/kafka/bin/kafka-run-class.sh on all broker nodes. Replace:
# JMX port to use
if [ $JMX_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
fi
with:
# JMX port to use
if [ $ISKAFKASERVER = "true" ]; then
JMX_REMOTE_PORT=$JMX_PORT
else
JMX_REMOTE_PORT=$CLIENT_JMX_PORT
fi
if [ $JMX_REMOTE_PORT ]; then
KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_REMOTE_PORT"
fi
After the brokers have been restarted, let's modify the log level with jConsole:
$ jconsole <BROKER_FQDN>:<JMX_PORT>
It launches a jConsole window asking whether to retry insecurely; go ahead with that. Go to the MBeans tab, then Kafka > kafka.log4jController > Attributes, and double-click the Value of Loggers to list all log4j loggers. You can see the kafka logger shown above is set to INFO. We can check it using the getLogLevel operation, entering kafka as the loggerName. Fortunately, you can also set the value without restarting the broker, using the setLogLevel operation and passing DEBUG or TRACE for example.
10-17-2017
10:09 AM
2 Kudos
When starting spark-shell, it tries to bind to port 4040 for the Spark UI. If that port is already taken by another active spark-shell session, it then tries to bind to 4041, then 4042, and so on. Each time a binding doesn't succeed, a huge WARN stack trace is printed, which can be filtered out:
[user@serv hive]$ SPARK_MAJOR_VERSION=2 spark-shell
SPARK_MAJOR_VERSION is set to 2, using Spark2
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/20 11:49:43 WARN AbstractLifeCycle: FAILED ServerConnector@2d258eff{HTTP/1.1}
{0.0.0.0:4040}: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.spark_project.jetty.server.ServerConnector.open(ServerConnector.java:321)
at org.spark_project.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.spark_project.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
at org.spark_project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$newConnector$1(JettyUtils.scala:333)
at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$httpConnect$1(JettyUtils.scala:365)
at org.apache.spark.ui.JettyUtils$$anonfun$7.apply(JettyUtils.scala:368)
at org.apache.spark.ui.JettyUtils
To filter that stack trace, let's set that class's log4j verbosity to ERROR in /usr/hdp/current/spark2-client/conf/log4j.properties:
# Added to avoid stack traces when binding the Spark UI port
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
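As a side note (a minimal PySpark sketch, not from the original article), once the noisy WARNs are filtered you can still check which port a given session's UI actually bound to; the application name below is arbitrary:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ui-port-check").getOrCreate()
# uiWebUrl reports the URL (and thus the port) the UI actually bound to,
# e.g. http://<driver-host>:4041 when 4040 was already taken.
# The number of successive ports tried is governed by spark.port.maxRetries (default 16).
print(spark.sparkContext.uiWebUrl)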
01-18-2017
02:17 PM
On RHEL/CentOS you might encounter an exception when trying to stop or restart Oozie:
resource_management.core.exceptions.Fail: Execution of 'cd /var/tmp/oozie && /usr/hdp/current/oozie-server/bin/oozie-stop.sh' returned 1. -bash: line 0: cd: /var/tmp/oozie: No such file or directory
This is most likely caused by the /etc/cron.daily/tmpwatch cron job, which deletes files and directories left unmodified for more than 30 days:
[root@local ~]# cat /etc/cron.daily/tmpwatch
#! /bin/sh
flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
-x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
-X '/tmp/hsperfdata_*' 10d /tmp
/usr/sbin/tmpwatch "$flags" 30d /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
if [ -d "$d" ]; then
/usr/sbin/tmpwatch "$flags" -f 30d "$d"
fi
done
Just recreate the directory with the right ownership and permissions and you're good to go:
[root@local ~]# mkdir /var/tmp/oozie
[root@local ~]# chown oozie:hadoop /var/tmp/oozie
[root@local ~]# chmod 755 /var/tmp/oozie
09-14-2016
12:46 PM
If you lose 2 ZooKeeper nodes and the ensemble loses its quorum, the NameNodes will stay up, but if the active NameNode then goes down, the automatic failover won't occur. If you lose 2 JournalNodes and the JournalNode quorum is lost, your NameNodes will go down.
09-08-2016
07:02 AM
Thanks @Junping Du
08-24-2016
08:21 PM
2 Kudos
For maintenance mode, you can always turn off maintenance mode and enable the service manually. Some of the components are best left off when you don't really need them, because some of them are real resource hogs. Generally, HDFS, MapReduce2, YARN, and Hive should be green; that keeps most things working.
07-27-2016
07:24 PM
1 Kudo
@bigdata.neophyte: We have a single-node HDP 2.3 VM with Kerberos, Ranger, and Ranger KMS enabled, available for download here. This was done as part of a security workshop/webinar we did: https://github.com/abajwa-hw/security-workshops#current-release