Member since: 10-01-2018
Posts: 272
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3600 | 09-28-2020 08:05 AM
 | 3209 | 04-16-2020 09:20 AM
 | 1576 | 04-16-2020 08:48 AM
 | 4076 | 04-16-2020 08:10 AM
06-01-2022
12:06 PM
@yagoaparecidoti If you notice that the high memory utilization moves from tablet server to tablet server, another candidate is a problem with the schema of one or more tables. The specific symptom I have in mind is tablets that are too large because the table has too few partitions; this can drive high memory utilization because of the amount of information that must be loaded into memory. The quickest way to check is to look at the Charts for the Tablet Server with high memory utilization and find the total size of data across all tablets on that server. Divide that number by the number of replicas on the server to get the average replica size. It is entirely possible for this problem to originate with a single table, so if you have one or more tables that you know have few partitions, I recommend checking those tables specifically. Anything beyond this will require log analysis, and is better suited to a support case.
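As a rough sketch of that arithmetic (the figures below are hypothetical placeholders; substitute the totals reported in Charts and by ksck for your own server):
$ TOTAL_ON_DISK_BYTES=$((4 * 1024**4))   # e.g. 4 TiB of data on the tablet server
$ REPLICA_COUNT=80                       # e.g. replicas hosted on that tablet server
$ echo "average replica size (GiB): $(( TOTAL_ON_DISK_BYTES / REPLICA_COUNT / 1024**3 ))"
average replica size (GiB): 51
If the average comes out very large relative to the rest of the cluster, too few partitions on one or more tables is the likely culprit.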
05-26-2022
09:52 AM
These are all reasonable numbers by themselves, so no huge red flags here. My next question is: are the tablets balanced across the Tablet Servers? If you run kudu cluster ksck, how many replicas does it show on each server? One scenario where we might see something like this is a cluster that was created with 3 Tablet Servers and then had 2 more added. That causes an imbalance because only data added to the cluster after those nodes joined lands on them, which means fewer tablets running on them, which means less memory load. The kudu rebalancer may be an option: $ sudo -u kudu kudu cluster rebalance <master addresses>
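As a hedged sketch of how to check this (<master addresses> is the comma-separated list of your Kudu masters; --report_only asks the rebalancer to report what it would move without actually moving anything):
$ sudo -u kudu kudu cluster ksck <master addresses>
$ sudo -u kudu kudu cluster rebalance <master addresses> --report_only
The first command summarizes cluster health, including the replica counts per Tablet Server; the second previews the rebalancer's plan so you can judge how skewed the placement is before committing to any data movement.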
05-25-2022
10:35 AM
There are several possible reasons for this, including but not limited to: a too-low value for memory_limit_hard_bytes; too-large tablets; or a workload too heavy for the available memory (which is essentially the same problem, but with no room left to raise the limit). To tell more, we need to know more about the system. Can you tell me the following things about your cluster:
1) How many tablet replicas are there in your cluster in total?
2) How much data is in the Kudu cluster? This should be available in the Charts section of Cloudera Manager.
3) How many fs_data_dirs are configured for your Tablet Servers?
4) How many maintenance manager threads are there?
5) What is the current value of memory_limit_hard_bytes?
With this information I can identify whether you are exceeding any basic scaling limits, have an inefficient arrangement of workload, or have some basic bottleneck that slows things down.
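For items 3-5, one hedged way to read the current values directly from a tablet server is its /varz flags page; the host name and the default web UI port 8050 below are assumptions, so adjust them to your deployment:
$ curl -s http://<tserver-host>:8050/varz | grep -E 'fs_data_dirs|maintenance_manager_num_threads|memory_limit_hard_bytes'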
08-23-2021
09:24 PM
Context: As of CDP 7.1.2, Sentry is deprecated for Kudu and Ranger becomes the solution for fine-grained authorization in Kudu. The question this article is meant to address is: what does enabling Ranger for Kudu mean for the Impala/Kudu stack, especially if Impala is already using Ranger? This is not obvious up front because the information is spread over three different sets of documents.
Fundamentally, the answer is: nothing. The reason is straightforward: enabling the Kudu module in Ranger should automatically configure the --trusted_user_acl flag in Kudu to include Impala (if installed) and Hive (if installed). This flag exempts the listed users from being checked against Kudu's authorization model, which in CDP installations means Ranger policies. So the correct way to control access to Impala and Hive tables that are stored in Kudu is through the Hadoop-SQL policies set for Impala and Hive. By default, enabling Ranger for Kudu should have no impact on your Impala or Hive operations; any further changes you want to make to authorization should be made in the Hadoop-SQL policies.
What, then, is the motivation for using Ranger with Kudu? These tables are all still accessible at the Kudu level by users, and changes made at the Kudu level can cause inconsistencies or data loss. Enabling Ranger at the Kudu level prevents this.
There are two other use cases where policies are set at the Kudu level in cm_kudu: Spark and NiFi. These services access Kudu on a per-user basis instead of having a service user communicate with Kudu on their behalf, so the normal logic of writing Ranger policies applies. Please see the documentation of the respective service for more information.
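For illustration only, the resulting flag on the Kudu service looks something like the line below; the exact list is generated by Cloudera Manager and depends on which services are installed, so treat this value as a placeholder:
--trusted_user_acl=impala,hive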
06-20-2021
09:21 PM
The Qualys tool reports vulnerabilities in ZooKeeper, even when the ZooKeeper security configuration is applied (HDP doc, CDP doc).
There are two kinds of reports Qualys makes that are not addressable by Cloudera:
The security guidance keeps several znodes in the affected services world-readable, for example: /zookeeper/quota sasl:zookeeper:cdrwa,world:anyone:r. Any world-readable znode will show up in the scan and will require an exception to be filed for it. This is the position of Qualys, as reported by our customers who use Qualys.
The security guidance does not cover several services. The following components require no action according to our documentation:
Calcite
Knox
MapReduce
Spark
Tez
Zeppelin
The Qualys tool will still report znodes owned by these services.
Note: it is possible to harden the ACLs beyond the Best Practices recommendation in the documentation, and to harden the ACLs of services not covered in the Best Practices. However, Cloudera cannot say what the correct ACLs are in that case; testing on the customer's side is required. It is very easy to set ACLs such that a service that needs access to a znode no longer has it, so this needs to be handled znode by znode.
If an attempt at hardening the ACLs is going to be made, these suggestions may help (a hedged zkCli.sh sketch follows the list):
Try implementing SASL (this is the same method used in most of the Best Practices recommendations)
Try restricting privileges to the service user
If something breaks, try to identify the user that performed the failed action, and add the necessary privileges only for that user
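As a hedged illustration of the second and third suggestions, using the zkCli.sh shipped with ZooKeeper (the znode path and service user are hypothetical placeholders; verify against your own services before applying anything):
getAcl /some/service/znode
setAcl /some/service/znode sasl:serviceuser:cdrwa,world:anyone:r
The getAcl call shows the current ACL so you have something to roll back to; the setAcl call grants full privileges to the SASL-authenticated service user and leaves only read access for everyone else.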
06-01-2021
08:19 PM
Note: Cloudera does not support antivirus software of any kind.
This article contains recommendations for excluding HDP components and directories from AV scans and monitoring. It is important to note that not every recommendation applies to every service, and further, some services will have additional items to exclude which are unique to them. These details will be addressed in individual articles dedicated to the service in question.
The three primary locations you will want to exclude from Antivirus are:
Data directories: These can be very large, and therefore take a long time to scan; they can also be very write-heavy, and therefore suffer performance impacts or failures if the AV holds up writes.
Log directories: These are write-heavy.
Scratch directories: These are internal locations used by some services for writing temporary data, and can also cause performance impacts or failures if the AV holds up writes.
In general, consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/hadoop /usr/hdp /etc/hadoop /etc/<component> /var/lib/<component>
Runtime and Logging
/var/run/<component> /var/log/<component>
Scratch and Temp
/var/tmp/<component> /tmp/<component>
Note: The <component> does not only refer to the service name, as a given service may have multiple daemons with their own directories.
Example: ambari-agent and ambari-server.
Across HDP services there are also many user-configurable locations. Most of these can be found in Ambari properties with names like 'service.scratch.dir' and 'service.data.dir'; go to Ambari > Service > Configs > Advanced and search for any property containing "dir", all of which may be considered for exclusion. Instructions for specific services follow:
Ambari:
Note: Ambari has a special requirement in the form of a user-configurable database. I recommend you exclude this database. However, the details of this database are set on installation; the database may be colocated with ambari-server, or on a remote host. Consult with your database administrators for details on the path where the database files are stored; Ambari does not keep this information anywhere in its configuration. If you need details about which database Ambari is using, search for jdbc in the ambari.properties file:
# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/usr/hdp
/usr/lib/ambari-agent
/usr/lib/ambari-server
/etc/hadoop
/etc/ambari-agent
/etc/ambari-server
/var/lib/ambari-agent
/var/lib/ambari-server
Runtime and Logging
/var/run/ambari-agent
/var/run/ambari-server
/var/log/ambari-agent
/var/log/ambari-server
HDFS:
Note: The directories in HDFS are user-configurable. I recommend you exclude these, especially the data directory for the DataNode and the meta directories for the NameNode and JournalNode. These details can be found in the 'hdfs-site.xml' file:
# grep -A1 "dir" /etc/hadoop/conf/hdfs-site.xml
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/usr/hdp
/etc/hadoop
/var/lib/hadoop-hdfs
Runtime and Logging
/var/run/hadoop
/var/log/hadoop
Scratch and Temp
/tmp/hadoop-hdfs
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to exclude the other components.
YARN:
Note: The directories YARN uses are user-configurable. I recommend you exclude them. These properties can be found in Ambari > YARN > Configs > Advanced (a hedged grep alternative is shown after the list):
yarn.nodemanager.local-dirs
yarn.nodemanager.log-dirs
yarn.nodemanager.recovery.dir
yarn.timeline-service.leveldb-state-store.path
yarn.timeline-service.leveldb-timeline-store.path
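The same values can usually be read from the on-disk client configuration; the path below assumes the default Ambari-managed location:
# grep -A1 -E "local-dirs|log-dirs|recovery.dir|leveldb" /etc/hadoop/conf/yarn-site.xml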
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/usr/hdp
/etc/hadoop
/var/lib/hadoop-yarn
Runtime and Logging
/var/run/hadoop-yarn
/var/log/hadoop-yarn
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
MapReduce:
Note: Some directories in MapReduce are user-configurable. I recommend you exclude them. These properties can be found in Ambari > YARN > Configs > Advanced and this one, in particular, should be excluded:
mapreduce.jobhistory.recovery.store.leveldb.path
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/usr/hdp
/etc/hadoop
/var/lib/hadoop-mapreduce
Runtime and Logging
/var/run/hadoop-mapreduce
/var/log/hadoop-mapreduce
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
ZooKeeper:
Note: ZooKeeper has a user-configurable data directory. I recommend you exclude it. This directory can be found by running the following command:
# grep dataDir /etc/zookeeper/conf/zoo.cfg
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/usr/hdp
/etc/hadoop
Runtime and Logging
/var/run/zookeeper
/var/log/zookeeper
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
06-01-2021
08:10 PM
Note: Cloudera does not support Antivirus software of any kind.
This article contains recommendations for excluding CDH components and directories from AV scans and monitoring. It is important to note that not every recommendation applies to every service, and further, some services will have additional items to exclude which are unique to them. These details will be addressed in individual articles dedicated to the service in question.
The three primary locations you will want to exclude from antivirus are:
Data directories: These can be very large, and therefore, take a long time to scan; they can also be very write-heavy, and therefore suffer performance impacts or failures if the AV holds up writes.
Log directories: These are write-heavy.
Scratch directories: These are internal locations used by some services for writing temporary data, and can also cause performance impacts or failures if the AV holds up writes.
In general, consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera /etc/<component> /var/lib/<component>
Runtime and Logging
/var/run/<component> /var/log/<component>
Scratch and Temp
/var/tmp/<component> /tmp/<component>
Note: The <component> does not only refer to the service name, as a given service may have multiple daemons with their own directories.
Example: cloudera-scm-server and cloudera-scm-agent.
Across CDH services, there are also many user-configurable locations. Most of these can be found in Cloudera Manager properties with names like service.scratch.dir and service.data.dir; go to Cloudera Manager > Service > Configuration and search for any property containing "dir", all of which may be considered for exclusion. A hedged way to pull the same list through the Cloudera Manager API is shown below, and instructions for specific services follow it:
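This sketch dumps one service's configuration over the Cloudera Manager REST API and filters for directory-type properties; the host, port, credentials, API version, cluster name, and service name are all placeholders to adjust for your deployment:
$ curl -s -u admin:admin "http://cm-host:7180/api/v19/clusters/Cluster1/services/hdfs/config?view=full" | grep '"name"' | grep -i dir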
Cloudera Manager:
Note: Cloudera Manager has a special requirement in the form of a user-configurable database. I recommend you exclude this database. However, the details of this database are set on installation; the database may be colocated with cloudera-scm-server, or on a remote host. Consult with your database administrators for details on the path where the database information is stored.
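If you need a hint about which database Cloudera Manager is using, the connection details are normally recorded in db.properties on the Cloudera Manager Server host (the path assumes a default package installation, and the password line is filtered out):
# grep 'com.cloudera.cmf.db' /etc/cloudera-scm-server/db.properties | grep -v password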
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera/cm /opt/cloudera/cm-agent
/etc/cloudera-scm-agent
/etc/cloudera-scm-server
/var/lib/cloudera-host-monitor /var/lib/cloudera-scm-agent /var/lib/cloudera-scm-eventserver /var/lib/cloudera-scm-server /var/lib/cloudera-scm-server-db /var/lib/cloudera-service-monitor
Runtime and Logging
/var/run/cloudera-scm-agent
/var/run/cloudera-scm-server
/var/log/cloudera-scm-agent /var/log/cloudera-scm-alertpublisher /var/log/cloudera-scm-eventserver /var/log/cloudera-scm-firehose /var/log/cloudera-scm-server
HDFS:
Note: The directories in HDFS are user-configurable. I recommend you exclude these, especially the data directory for the DataNode and the meta directories for the NameNode and JournalNode. These details can be found in the hdfs-site.xml file:
# grep -A1 "dir" /etc/hadoop/conf/hdfs-site.xml
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-hdfs
Runtime and Logging
/var/log/hadoop-hdfs
Scratch and Temp
/tmp/hadoop-hdfs
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to exclude the other components.
YARN:
Note: The directories YARN uses are user-configurable. I recommend you exclude them. These properties can be found in Cloudera Manager > YARN > Configuration:
yarn.nodemanager.local-dirs
yarn.nodemanager.log-dirs
yarn.nodemanager.recovery.dir
yarn.timeline-service.leveldb-state-store.path
yarn.timeline-service.leveldb-timeline-store.path
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-yarn
Runtime and Logging
/var/log/hadoop-yarn
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
MapReduce:
Note: Some directories in MapReduce are user-configurable. I recommend you exclude them. These properties can be found in Cloudera Manager > MapReduce > Configs, and this one, in particular, should be excluded:
mapreduce.jobhistory.recovery.store.leveldb.path
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-mapreduce
Runtime and Logging
/var/log/hadoop-mapreduce
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
ZooKeeper:
Note: ZooKeeper has a user-configurable data directory. I recommend you exclude it. This directory can be found by running the following command:
# grep dataDir /etc/zookeeper/conf/zoo.cfg
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
Runtime and Logging
/var/log/zookeeper
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
06-01-2021
08:02 PM
Note: Cloudera does not support antivirus software of any kind.
This article contains recommendations for excluding CDP components and directories from AV scans and monitoring. It is important to note that not every recommendation applies to every service, and further, some services will have additional items to exclude that are unique to them. These details will be addressed in individual articles dedicated to the service in question.
The three primary locations you will want to exclude from antivirus are:
Data directories: These can be very large, and therefore, take a long time to scan; they can also be very write-heavy, and therefore suffer performance impacts or failures if the AV holds up writes.
Log directories: These are write-heavy.
Scratch directories: These are internal locations used by some services for writing temporary data, and can also cause performance impacts or failures if the AV holds up writes.
In general, consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera /etc/<component> /var/lib/<component>
Runtime and Logging
/var/run/<component> /var/log/<component>
Scratch and Temp
/var/tmp/<component> /tmp/<component>
Note: The <component> does not only refer to the service name, as a given service may have multiple daemons with their own directories.
Example: cloudera-scm-agent and cloudera-scm-server.
Across CDP services, there are also many user-configurable locations. Most of these can be found in Cloudera Manager properties with names like "service.scratch.dir" and "service.data.dir"; go to Cloudera Manager > Service > Configuration and search for any property containing "dir", all of which may be considered for exclusion. Instructions for specific services follow:
Cloudera Manager:
Note: Cloudera Manager has a special requirement in the form of a user-configurable database. I recommend you exclude this database. However, the details of this database are set on installation; the database may be co-located with cloudera-scm-server, or on a remote host. Consult with your database administrators for details on the path where the database information is stored.
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera/cm /opt/cloudera/cm-agent
/etc/cloudera-scm-agent
/etc/cloudera-scm-server
/var/lib/cloudera-host-monitor /var/lib/cloudera-scm-agent /var/lib/cloudera-scm-eventserver /var/lib/cloudera-scm-server /var/lib/cloudera-scm-server-db /var/lib/cloudera-service-monitor
Runtime and Logging
/var/run/cloudera-scm-agent
/var/run/cloudera-scm-server /var/run/cloudera-scm-server-db
/var/log/cloudera-scm-agent /var/log/cloudera-scm-alertpublisher /var/log/cloudera-scm-eventserver /var/log/cloudera-scm-firehose /var/log/cloudera-scm-server
HDFS:
Note: The directories in HDFS are user-configurable. I recommend you exclude these, especially the data directory for the DataNode and the meta directories for the NameNode and JournalNode. These details can be found in the hdfs-site.xml file:
# grep -A1 "dir" /etc/hadoop/conf/hdfs-site.xml
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-hdfs
Runtime and Logging
/var/log/hadoop-hdfs
Scratch and Temp
/tmp/hadoop-hdfs
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to exclude the other components.
YARN:
Note: The directories YARN uses are user-configurable. I recommend you exclude them. These properties can be found in Cloudera Manager > YARN > Configuration:
yarn.nodemanager.local-dirs
yarn.nodemanager.log-dirs
yarn.nodemanager.recovery.dir
yarn.timeline-service.leveldb-state-store.path
yarn.timeline-service.leveldb-timeline-store.path
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-yarn
Runtime and Logging
/var/log/hadoop-yarn
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
MapReduce:
Note: Some directories in MapReduce are user-configurable. I recommend you exclude them. These properties can be found in Cloudera Manager > MapReduce > Configuration, and this one, in particular, should be excluded:
mapreduce.jobhistory.recovery.store.leveldb.path
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
/var/lib/hadoop-mapreduce
Runtime and Logging
/var/log/hadoop-mapreduce
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
ZooKeeper:
Note: ZooKeeper has a user-configurable data directory. I recommend you exclude it. This directory can be found by running the following command:
# grep dataDir /etc/zookeeper/conf/zoo.cfg
Consider excluding the following directories and all of their subdirectories:
Installation, Configuration, and Libraries
/opt/cloudera
Runtime and Logging
/var/log/zookeeper
Note: HDFS, YARN, MapReduce, and ZooKeeper are mutually interdependent and you are likely to experience unsatisfactory results if you fail to also exclude the other components.
09-29-2020
07:17 AM
Now that you have made several attempts to start, are there any notifications in Ambari? What we should see is a series of operations attempting to run, and alerts for any errors encountered. The alerts icon is shaped like a bell, and the operations icon is shaped like a gear. If there is nothing in either of those locations, then Ambari is not responding at all. The simplest thing would be to restart the Sandbox from scratch. If you can get terminal access to the containers the cluster is running on, you can restart the ambari-server process on the Ambari host.
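For example, on a Docker-based Sandbox that would look roughly like this; the container name sandbox-hdp is an assumption and may differ in your Sandbox version:
# docker exec -it sandbox-hdp ambari-server restart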
09-28-2020
10:25 AM
I'm glad you decided to try it out! From the screenshot, it looks like all the services are currently in the Stopped state. You should see a big button that says Actions; this will produce a drop-down menu with different actions listed. What happens when you click on the Actions button and then select Start All Services?
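If the UI does not respond, the same start can be triggered through the Ambari REST API; the host, credentials, and cluster name below are assumptions based on a default Sandbox, so adjust them to your environment:
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/services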