Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 23938 | 10-16-2018 11:27 AM |
| 7989 | 09-29-2018 06:59 AM |
| 1226 | 07-17-2018 08:44 AM |
| 6801 | 04-18-2018 08:59 AM |
04-09-2018
09:24 PM
1 Kudo
@Anurag Mishra
fs.defaultFS
The fs.defaultFS property makes HDFS a file-system abstraction over the cluster, so that its root is not the same as the local system's. You need to set this value in order to address the distributed file system. fs.defaultFS in core-site.xml gives the DataNodes (and clients) the address of the NameNode; a DataNode looks here for the NameNode address and tries to contact it via RPC. Without fs.defaultFS set, the command $ hdfs dfs -ls / would initially show the local root filesystem, as below:
$ hdfs dfs -ls /
Warning: fs.defaultFS is not set when running "ls" command.
Found 21 items
dr-xr-xr-x - root root 4096 2017-05-16 20:03 /boot
drwxr-xr-x - root root 3040 2017-06-07 18:31 /dev
drwxr-xr-x - root root 8192 2017-06-10 07:22 /etc
drwxr-xr-x - root root 56 2017-06-10 07:22 /home
................
.............
drwxr-xr-x - root root 167 2017-06-07 19:43 /usr
drwxr-xr-x - root root 4096 2017-06-07 19:46 /var

dfs.namenode.http-address
This is the location of the NameNode web UI, set in the hdfs-site.xml configuration file, e.g.:
<property>
<name>dfs.namenode.http-address</name>
<value>node1.texas.us:50070</value>
<final>true</final>
</property>
The NameNode HTTP server address is controlled by the configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a hostname or IP address that maps to a single network interface, as above, but you can tell it to bind to all network interfaces by setting dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all network interfaces). This is the address and base port on which the NameNode web UI listens. Making the NameNode HTTP server listen on all interfaces this way requires a restart of the NameNode.
Hope that clarifies the difference for you.
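P.S. For reference, minimal entries might look like the following sketch; the hostname is a placeholder from the example above, and 8020 is assumed as the common default NameNode RPC port:
<!-- core-site.xml: clients and DataNodes resolve the NameNode through this URI -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://node1.texas.us:8020</value>
</property>
<!-- hdfs-site.xml: optionally bind the NameNode web UI to all interfaces -->
<property>
<name>dfs.namenode.http-bind-host</name>
<value>0.0.0.0</value>
</property>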
04-09-2018
11:27 PM
@Anurag Mishra Similarly, for HBase processes you can run the following commands to find the uptime of the HBase Master and HBase RegionServer:
# ps -ef | grep `cat /var/run/hbase/hbase-hbase-master.pid` | awk 'NR==1{print $5 " - " $7}'
# ps -ef | grep `cat /var/run/hbase/hbase-hbase-regionserver.pid` | awk 'NR==1{print $5 " - " $7}'
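As an alternative sketch, assuming the same pid files exist, ps can print the process start time and elapsed uptime directly (GNU ps output options, with = suppressing the header):
# ps -o lstart=,etime= -p `cat /var/run/hbase/hbase-hbase-master.pid`
# ps -o lstart=,etime= -p `cat /var/run/hbase/hbase-hbase-regionserver.pid`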
04-05-2018
07:30 AM
1 Kudo
@Anurag Mishra LDAP authentication is configured by adding a "ShiroProvider" authentication provider to the cluster's topology file (a sample provider block is sketched below). When enabled, the Knox Gateway uses Apache Shiro (org.apache.shiro.realm.ldap.JndiLdapRealm) to authenticate users against the configured LDAP store. Please go through this document link.
1. The Shiro provider is Knox-side code and already integrated; you need not worry about its internals. Change admin.xml (the admin topology, i.e. for Knox administrators) to the proper LDAP/AD-related values. For general usage, use the default topology for services integration.
2. Read the above documentation.
3. Read the above documentation.
4. Make a group of the users you want to give access to and whitelist them using an ACL.
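A typical ShiroProvider block in a topology file looks roughly like the following sketch; the LDAP URL, port, and user DN template are placeholders you must adapt to your own directory:
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>main.ldapRealm</name>
<value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://localhost:33389</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>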
03-27-2018
07:36 AM
@Anurag Mishra,
1) You can run the below command:
oozie job -oozie http://<host>:11000/oozie/ -log {oozieJobId}
2) Yes. The logs will also be saved under the /var/log/oozie directory.
-Aditya
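P.S. For example, with a hypothetical Oozie host and workflow id, the call would look like:
oozie job -oozie http://oozieserver.example.com:11000/oozie -log 0000123-180327123456789-oozie-oozi-W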
03-21-2018
05:46 PM
@Anurag Mishra
Below package should be enough to install only the client libraries: spark_<version>_<build>.noarch : Lightning-Fast Cluster Computing
e.g. spark2_2_6_4_0_91-2.2.0.2.6.4.0-91.noarch
P.S. If you are installing the client manually, make sure you copy the configuration files under /etc/spark2/conf/ from a node that is managed by Ambari. This way the configuration will be the same across the cluster.
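A minimal sketch of the manual steps, assuming an HDP yum repository is already configured and ambari-node is a placeholder for an Ambari-managed host:
# yum install -y spark2_2_6_4_0_91
# scp -r ambari-node:/etc/spark2/conf/* /etc/spark2/conf/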
04-03-2018
09:10 PM
Hi @Anurag Mishra, please accept the answer if it resolved your issue.
03-13-2018
09:07 AM
@Jay Kumar SenSharma Thanks Jay, it is working now. The value of PasswordAuthentication was set to no; changing it to yes fixed it.
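For anyone hitting the same problem, this is the relevant setting, assuming it refers to the standard sshd option in /etc/ssh/sshd_config:
# grep PasswordAuthentication /etc/ssh/sshd_config
PasswordAuthentication yes
# systemctl restart sshd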
03-09-2018
01:47 PM
1 Kudo
If you have some virtualization with a fault-tolerance option and shared storage (like VMware ESXi, etc.), I would recommend installing the Ambari Server there.
02-28-2018
02:20 PM
1 Kudo
@Anurag Mishra please check https://hortonworks.com/tutorial/tag-based-policies-with-apache-ranger-and-apache-atlas/
02-21-2018
07:56 AM
@Jay Kumar SenSharma Hi Jay,
curl -v "http://amb25102.example.com:6188/ws/v1/timeline/metrics?metricNames=bytes_in._rate._avg&hostname=&appId=HOST&instanceId=&startTime=1451630974&endTime=1519110315"
This REST API is not working for all kinds of metrics. I replaced bytes_in._rate._avg with master.Server.numDeadRegionServers:
curl -v "http://amb25102.example.com:6188/ws/v1/timeline/metrics?metricNames=master.Server.numDeadRegionServers&hostname=&appId=HOST&instanceId=&startTime=1451630974&endTime=1519110315"
but I am still not able to get the metrics results.