Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 869 | 06-04-2025 11:36 PM |
|  | 1443 | 03-23-2025 05:23 AM |
|  | 722 | 03-17-2025 10:18 AM |
|  | 2597 | 03-05-2025 01:34 PM |
|  | 1719 | 03-03-2025 01:09 PM |
04-24-2018
07:22 AM
@Victor Hely Have you tried restarting Atlas? That usually clears a stale config. Can you also share the logs in /var/log/atlas? Try to drill down into the failed Atlas service and paste the error here.
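A minimal sketch of pulling the most recent errors from the Atlas log on that host (assuming the default HDP log directory and the usual application.log file name, which may differ on your cluster):

$ ls -lt /var/log/atlas/                                   # newest log files first
$ grep -i error /var/log/atlas/application.log | tail -50  # last 50 error lines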
04-24-2018
07:19 AM
@Michael Bronson Above you are trying to fix corrupt HDFS blocks. With the default replication factor of 3 you should be okay; below is about fixing the local filesystem. What is your filesystem type, ext4 or something else? If you run
# e2fsck -y /dev/sdc
you will not have an opportunity to validate the corrections being applied. On the other hand, if you run
# e2fsck -n /dev/sdc
you can see what would happen without anything actually being applied, and if you run
# e2fsck /dev/sdc
you will be asked each time a significant correction needs to be applied.
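Put together, a rough sequence you might follow (assuming the affected disk really is /dev/sdc and that it is unmounted before the check):

# umount /dev/sdc        # never run e2fsck on a mounted filesystem
# e2fsck -n /dev/sdc     # dry run: report problems, change nothing
# e2fsck /dev/sdc        # interactive: confirm each significant fix
# e2fsck -y /dev/sdc     # unattended: answer "yes" to every fix
# mount /dev/sdc         # remount via its /etc/fstab entry once clean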
04-23-2018
08:17 PM
@Michael Bronson There could be a couple of reasons, let's check the obvious ones first. Have you checked SELinux on this host? If not:
$ echo 0 > /selinux/enforce
$ cat /selinux/enforce # should output "0"
That said, "Read-only filesystem" is not a permissions issue. The mount has become read-only, either because of errors in the filesystem or problems in the device itself. If you run "grep sdc /proc/mounts" you should see it listed as "ro". There may be some clue as to why in the messages in /var/log/syslog. Run a filesystem check; fsck will repair some of the errors. Execute fsck on an unmounted filesystem to avoid any data corruption issues, e.g.
# fsck /dev/sdc
That should repair the damage.
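A short sketch of the diagnostic side of this, assuming the device is /dev/sdc (adjust the device name and log file for your distribution):

$ grep sdc /proc/mounts        # "ro" in the flags confirms the read-only remount
$ dmesg | grep -i sdc          # kernel messages about the device
$ tail -100 /var/log/syslog    # or /var/log/messages on RHEL/CentOS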
04-23-2018
05:58 PM
@Michael Bronson Could you try unmounting and remounting that disk? Your disk could have gone bad and the filesystem is now in read-only mode. Can you also set the failure tolerance to 1: using the Ambari UI --> HDFS --> Configs, filter for the property "dfs.datanode.failed.volumes.tolerated" and set it to 1, then restart the stale HDFS services. All should be in order.
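As a rough sketch of the unmount/remount step, assuming the failed disk is /dev/sdc and it is mounted on a hypothetical DataNode data directory /grid/2 (use your own device and mount point):

# umount /grid/2            # stop the DataNode first if the mount is busy
# fsck /dev/sdc             # repair the filesystem while it is unmounted
# mount /dev/sdc /grid/2    # remount
# grep sdc /proc/mounts     # should now show "rw" instead of "ro"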
04-23-2018
03:59 PM
@Victor Hely Make sure your HBase is up and all green, then retry!
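One quick way to confirm HBase is healthy before retrying (this assumes a working HBase client configuration on the node; it is just a sanity check):

$ echo "status 'summary'" | hbase shell   # expect an active master and 0 dead region servers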
04-23-2018
07:44 AM
@Christian Lunesa If you are only interested in the number of rows and not in displaying all the lines in the table, then try select count(1) from table_name; That should be faster, hope that helps!
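For example, against a hypothetical table web_logs in a Beeline session (the table name is purely illustrative):

0: jdbc:hive2://hive_host:10000/default> select count(1) from web_logs;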
04-23-2018
07:29 AM
@Swaapnika Guntaka Hey, don't panic, the files are right in there; .Trash hides some subdirectories (Current/user), see below. Replace {xxx} with the HDFS user who deleted the file, and after the last / you will see all the files that were deleted and are not yet expunged from HDFS, in your case 360. As that user or as hdfs, run the below
$ hdfs dfs -ls /user/{xxx}/.Trash/Current/user/{xxx}/
and to restore the file
$ hdfs dfs -cp /user/{xxx}/.Trash/Current/user/{xxx}/deleted_file /user/{xxx}/
Hope that helps
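A worked example with purely illustrative names (user alice, file data.csv); substitute your own user and file name:

$ hdfs dfs -ls /user/alice/.Trash/Current/user/alice/                       # list what is still in trash
$ hdfs dfs -cp /user/alice/.Trash/Current/user/alice/data.csv /user/alice/  # copy it back
$ hdfs dfs -ls /user/alice/data.csv                                         # confirm the restore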
04-20-2018
12:58 PM
@Saravana V Hive CLI is being deprecated. HiveServer2 in Hadoop 2.0 introduced its own CLI called Beeline, which is a JDBC client based on SQLLine, and with new development being focused on HiveServer2, Hive CLI will soon be deprecated in favor of Beeline. To use the new Hive CLI implemented on top of Beeline, set export USE_DEPRECATED_CLI=false. I came across a case where Atlas doesn't see objects (databases or tables) created from the Hive CLI, so try using Beeline instead. At the console type
$ beeline
beeline> !connect jdbc:hive2://{hive_host}:10000/{database}
You will be prompted for the username and the password.
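Equivalently, the connection can be made in one step from the shell; a sketch assuming HiveServer2 listens on its default port 10000 (replace {hive_host}, {database}, {username}, and {password} with your own values):

$ beeline -u "jdbc:hive2://{hive_host}:10000/{database}" -n {username} -p {password}
# -u is the JDBC URL, -n the username, -p the password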
04-20-2018
09:07 AM
@Sriram Hadoop You will need a jaas.conf file for solr. Here is the documentation
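A rough sketch of what a Kerberos jaas.conf for Solr typically looks like; the keytab path, principal, and realm below are placeholders for illustration, not values from this thread:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/solr.service.keytab"
  principal="solr/your.host.example.com@EXAMPLE.COM";
};

Solr then needs to be pointed at this file via the java.security.auth.login.config system property (for example in solr.in.sh).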