Member since
01-25-2016
345
Posts
86
Kudos Received
25
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4998 | 10-20-2017 06:39 PM
| 3532 | 03-30-2017 06:03 AM
| 2585 | 02-16-2017 04:55 PM
| 16096 | 02-01-2017 04:38 PM
| 1141 | 01-24-2017 08:36 PM
11-02-2017
07:10 PM
Yes, I had the same issue with comma-separated values; pipe separation did fix it. Thanks!
12-13-2016
04:44 PM
After running ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar, everything works - restart and service check pass. Thank you so much!
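For context, a minimal sketch of the full sequence, assuming MySQL is already set up for Ambari and the connector jar sits at the path shown; run it on the Ambari server host:

```
# register the MySQL JDBC driver with Ambari (command from the post above)
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# restart the server so the driver registration takes effect
ambari-server restart
```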
12-07-2016
03:26 PM
@Baruch AMOUSSOU DJANGBAN
You can do this as well. If you have Cluster Shell installed on the cluster, the following simple script restarts the Ambari agent on every node:

#!/bin/sh
clush -g all ambari-agent restart

Refer to the link below for more info about the open-source ClusterShell: https://github.com/cea-hpc/clustershell/downloads
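A hedged aside on the -g all flag used above: clush resolves group names from a groups file. A minimal sketch with hypothetical node names (the exact path can vary by ClusterShell version and packaging):

```
# /etc/clustershell/groups  (path may differ by version)
# "all" expands to every node; the node names below are hypothetical
all: master01 worker[01-10]
```

Running clush -g all -b uptime is a quick way to verify connectivity to the group before issuing the restart.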
08-23-2016
10:28 PM
3 Kudos
@Kumar Veerappana
Assuming that you are only interested in who has access to Hadoop services, extract all OS users from all nodes by checking the /etc/passwd file content. Some of them are legitimate users needed by Hadoop tools, e.g. hive, hdfs, etc. Users provisioned for HDFS will have a /user/username folder in HDFS; you can see that with hadoop fs -ls /user executed as a user who is a member of the hadoop group. If they have access to the hive client, they are also able to perform DDL and DML actions in Hive.

The above will allow you to understand the current state; however, this is your opportunity to improve security even without the bells and whistles of Kerberos/LDAP/Ranger. You can force users to access Hadoop ecosystem client services via a few client/edge nodes where only client services are running, e.g. the Hive client. Users other than power users should not have accounts on the name node, admin node, or data nodes. Any user that can access a node where client services are running can access those services, e.g. HDFS or Hive.
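A hedged sketch of that audit, assuming passwordless SSH from an admin node; hosts.txt and the output file names are hypothetical:

```
#!/bin/sh
# Collect regular OS accounts (UID >= 1000; >= 500 on older RHEL) from every
# node listed in hosts.txt (one hostname per line), then compare them with
# the HDFS home directories under /user.
while read -r host; do
  ssh "$host" "awk -F: '\$3 >= 1000 {print \$1}' /etc/passwd"
done < hosts.txt | sort -u > os_users.txt

# users that have an HDFS home directory (grep '^d' skips the header line)
hadoop fs -ls /user | grep '^d' | awk -F/ '{print $NF}' | sort -u > hdfs_users.txt

# lines unique to either list: OS accounts without an HDFS home, and vice versa
comm -3 os_users.txt hdfs_users.txt
```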
07-25-2017
03:41 AM
I ran into the same issue, but it was automatically fixed after restarting my DataNode server (rebooting the physical Linux server).
08-10-2016
03:29 PM
@thejas I tested with an insert script - basically writes.
10-24-2016
08:59 PM
Hi @Gerrit Slot
Posting my comment here just in case: we have identified an issue with the current TP #1.7 and are working to correct it. Things should be squared away shortly with the repositories. Thank you!
08-03-2016
08:31 AM
2 Kudos
Following are the changes in CentOS 7:

- New initialization system, systemd.
- New firewall control, firewalld. This adds a more dynamic and flexible way to control the firewall module in the kernel, which is still netfilter.
- New bootloader, GRUB2. It adds rich scripting support as well as support for the new hardware options offered on modern mainboards.
- New default filesystem, XFS. It adds support for larger single filesystems, faster format times (0 seconds), integrated snapshots, and live filesystem dumps for backup without first unmounting.
- GNOME 3. This only really applies to those who use RHEL/CentOS on the desktop, like me. As with any other distro, you aren't locked into GNOME 3. I personally like it, but KDE is readily available and others can be found on EPEL.

If you are used to previous versions you may want to stick with 6, since 7 has a lot of command changes (see the sketch below). In our environment we are using CentOS 7 and still haven't faced any issues with performance, etc.
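To illustrate the command changes mentioned above, a brief hedged sketch contrasting CentOS 6 and 7; ambari-agent and port 8080 are just example values:

```
# service management: CentOS 6 service/chkconfig vs CentOS 7 systemd
# CentOS 6: service ambari-agent restart
systemctl restart ambari-agent
# CentOS 6: chkconfig ambari-agent on
systemctl enable ambari-agent

# firewall: CentOS 6 iptables rules vs CentOS 7 firewalld (still netfilter underneath)
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload
```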
07-13-2016
09:01 AM
I have added the below properties in the advanced log4j properties, and Spark is creating logs in a local directory:

log4j.rootLogger=INFO, rolling
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.file=/var/log/spark/spark.log
log4j.appender.rolling.encoding=UTF-8
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.conversionPattern=[%d] %p %m (%c)%n
log4j.appender.rolling.maxBackupIndex=5
log4j.appender.rolling.maxFileSize=50MB
log4j.logger.org.apache.spark=WARN
log4j.logger.org.eclipse.jetty=WARN
#log4j.appender.rolling.file=${spark.yarn.app.container.log.dir}/spark.log

Note: log4j.appender.rolling.file=${spark.yarn.app.container.log.dir}/spark.log doesn't work for me for writing logs to HDFS.
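As a related, hedged alternative (not from the post above): a custom log4j file can also be supplied per application through spark-submit; the class name and jar below are hypothetical:

```
spark-submit \
  --files /etc/spark/conf/log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/etc/spark/conf/log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties" \
  --class com.example.MyApp myapp.jar
```

On executors, the --files copy lands in the container working directory, hence the bare file:log4j.properties reference.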
06-30-2016
02:31 PM
Thanks, Divakar!