Member since: 01-18-2017
Posts: 24
Kudos Received: 8
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 12065 | 06-16-2017 12:14 AM |
| | 32620 | 02-24-2017 06:26 AM |
07-02-2017
11:54 PM
1 Kudo
Hello,

If the Hue Kerberos Ticket Renewer does not start, check your KDC configuration and the ticket renewal property, maxrenewlife, for the hue/<hostname> and krbtgt principals to ensure they are renewable. If not, running the following commands on the KDC will enable renewable tickets for these principals:

kadmin.local: modprinc -maxrenewlife 90day krbtgt/MY.REALM
kadmin.local: modprinc -maxrenewlife 90day +allow_renewable hue/my-hostname@MY.REALM

Thanks!
Laith
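To confirm whether a principal is actually non-renewable before modifying it, you can inspect its getprinc output. A minimal sketch, assuming a sample line of the kind kadmin.local prints (the sample string and grep pattern here are illustrative, not captured from a real KDC):

```shell
# Hypothetical line from: kadmin.local -q "getprinc krbtgt/MY.REALM"
sample='Maximum renewable life: 0 days 00:00:00'

# A zero maximum renewable life means tickets for this principal cannot be renewed
if printf '%s\n' "$sample" | grep -q 'renewable life: 0 days 00:00:00'; then
  echo "not renewable - run: modprinc -maxrenewlife 90day krbtgt/MY.REALM"
else
  echo "renewable"
fi
```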
06-27-2017
08:19 AM
Hello,

I would check the Cloudera Manager Server log (in /var/log/cloudera-scm-server/), which may have more helpful information about this failure, from the time when the start command failed.

Thanks,
Laith
06-19-2017
01:22 AM
1 Kudo
Hello DDreamer94,

Thanks for confirming that the documentation works for you, and I agree with @mbigelow: CDH itself isn't really doing the checking so much as relying on system time. The NTP check and Chrony check are done by the Agent, since there are some time-sensitive components in the stack.

Please let us know if you have any other concerns. Have a nice day!

Laith
06-16-2017
12:14 AM
1 Kudo
Hello DDreamer94!

CDH requires that you configure the NTP service on each machine in your cluster. For more information, please take a look at: https://www.cloudera.com/documentation/enterprise/latest/topics/install_cdh_enable_ntp.html

Thanks,
Laith
06-15-2017
12:43 AM
Hello DemonBY,

This IP resolves to archive.cloudera.com. Would you please post the output of this command:

$ sudo ps ex | grep 151.101.36.167

Thanks!
Laith
05-19-2017
08:52 AM
Hello there,

For internal networking, you can control the hostnames that are associated with the other servers' IP addresses in the /etc/hosts file.

Thanks,
Laith
05-15-2017
03:05 PM
Hello Vihoop,

Thanks for your question. /etc/hosts is not the only place to set or configure the hostnames of hosts. A valid public DNS record can also resolve your hosts, in case you have defined your hosts' hostnames with a public DNS provider (A or CNAME records). However, Linux systems usually resolve a hostname according to the order of the resolvers in the /etc/nsswitch.conf file. There are many standard DNS commands that examine your hostname resolution, such as dig, host, nslookup, and whois.

Respectfully,
Laith
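As a quick sketch of that resolution order: the hosts line in /etc/nsswitch.conf can be read directly, and getent follows it (unlike dig or nslookup, which query DNS servers directly). The exact hosts line varies per system:

```shell
# Show the resolver order for hostnames (e.g. "hosts: files dns")
grep '^hosts:' /etc/nsswitch.conf 2>/dev/null || echo 'no nsswitch.conf found'

# getent resolves through the configured order, so it reflects /etc/hosts too
getent hosts localhost
```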
04-19-2017
12:03 PM
3 Kudos
Question What are the minimum and maximum cluster sizes?
Answer The minimum cluster size is 3 worker nodes, 1 Master node, and 1 Cloudera Manager node, for a total of five nodes. The maximum cluster size varies based on cluster type and public cloud. Further information can be found in this Cloudera Altus Documentation section. NOTE: Each cluster spun up by Cloudera Altus requires 2 additional nodes, one for Cloudera Manager and one for the Master services of that cluster. These 2 nodes are a hard requirement for a functional cluster and need to be accounted for when calculating the public cloud instance limits.
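The node accounting in the note above can be sketched with simple arithmetic; the cluster and worker counts below are made-up numbers for illustration:

```shell
# Each Altus cluster needs its workers plus 2 extra nodes (Cloudera Manager + Master)
clusters=3
workers_per_cluster=5
total=$(( clusters * (workers_per_cluster + 2) ))
echo "instances needed: $total"   # 3 clusters of 5 workers -> 21 instances
```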
04-19-2017
11:14 AM
Question What type of access does a Cloudera Altus cluster provide for the Cloudera Manager user?
Answer Read-only access. The user (guest by default) can't modify or change settings or configurations in Cloudera Manager.
04-19-2017
11:07 AM
Question Can I choose the Linux distribution that a Cloudera Altus Cluster will be running on?
Answer Currently, the Linux distribution selected for deployment in Cloudera Altus clusters is based on CentOS 7.x 64-bit. At this time, no other Linux distributions are available for use with Cloudera Altus.
04-19-2017
11:04 AM
Question Does Cloudera save AWS user credentials (AWS Access Key ID or AWS Secret Access Key)?
Answer No, Cloudera doesn't save AWS user credentials. If the Cloudera Altus environment is set up using the Cloudera Altus Quickstart, the user is prompted for these credentials, but they are never sent to Cloudera Altus. They are used by the web application residing within the user's browser, which sends commands directly to AWS to create the necessary resources using a CloudFormation script. If the environment is created via the Cloudera Altus wizard, credentials never need to appear inside the Cloudera Altus console. The user creates the necessary resources in the AWS console and then grants access to them using the AWS cross-account access mechanism, which does not involve explicit key management.
04-19-2017
11:00 AM
Question Does Cloudera have access to the customer data in Cloudera Altus Clusters?
Answer No. Cloudera Altus clusters are set up with the expectation that data will reside on the cloud provider's object storage (AWS S3, Azure ADLS). When data resides in this object storage (which exists inside a customer account within AWS or Azure), the Cloudera Altus service does not have access to it.
04-19-2017
10:51 AM
2 Kudos
Question What are the available cluster types in Data Engineering Cluster?
Answer Currently, Cloudera provides clusters that can run the following Data Engineering job types:
- Hive on MapReduce
- Hive on Spark
- Spark on YARN
- MapReduce2
- PySpark

Note: Job submission to a cluster is restricted to that cluster's job type. For example, you cannot submit a plain MR2 job on a Spark-on-YARN cluster, even though Spark is configured to use YARN. You would need to stand up a separate YARN cluster for it to accept an MR2/YARN job.
03-31-2017
08:46 AM
1 Kudo
Hello Victor,

Thanks for your response! As I can see, the problem is not within your AWS configuration, as you should be able to connect through the security group rules that you configured. However, since you get no output from running netstat on the server itself, that means your CM is not running or is not configured properly. You'd need to check the status of the service on the server. First, check whether the server is running:

service cloudera-scm-server status

If it is not running, try to restart the service, and wait for a minute or two so the service starts completely:

service cloudera-scm-server restart

And while you attempt to log in, it's a good idea to monitor the server log to see what is going on at the cluster level:

tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

Let me know if you need any help!

Thanks!
Laith
03-30-2017
09:19 AM
1 Kudo
Hello,

Thanks for your question. You’d need to identify whether it is an AWS or a CM issue. First, run this command on the cluster (locally) to verify that port 7180 is listening:

netstat -pltn | grep 7180

To check if it’s an AWS issue, check connectivity over port 7180 from your end (remotely):

nc -v $YOUR_PUBLIC_HOST_NAME 7180

If it doesn’t pass, check the security group rules of your AWS environment and make sure that you have allowed port 7180/tcp in the AWS inbound security group rules. In fact, you'd need to allow all the related CDH ports in the inbound rules (7180, 7182, 7183, and 7432). Another point to consider is your server (instance) firewall; make sure it allows access over the related port(s). Otherwise, I'd check the CM server log while you are trying to connect for more information. These are basic steps to start investigating; if you have any concerns or questions, please feel free to update this post!

Thanks!
Laith Al Obaidy
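The remote connectivity test with nc can also be written as a small helper using bash's built-in /dev/tcp, which avoids depending on nc being installed (the host and port below are placeholders; timeout comes from coreutils):

```shell
# Prints "open" if a TCP connection to host:port succeeds within 2s, else "closed"
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 7180
```

For a remote CM host you would call it as `check_port $YOUR_PUBLIC_HOST_NAME 7180` from your workstation.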
02-24-2017
06:26 AM
Hello Rashmi,

To find your CM version, go to CM -> Support -> About. To find your CDH version, go to CM -> Clusters.

Best,
Laith
02-22-2017
10:50 AM
1 Kudo
Hello again, Vinay

As shown in the output of the executed commands, your system is not registered with RHN, so you won’t be able to download or update your system packages using the default repository. If you haven’t started your RHEL evaluation yet, you can do so here: https://access.redhat.com/products/red-hat-enterprise-linux/evaluation

Also, make sure that your system is registered and subscribed, using these commands:

# subscription-manager register
# subscription-manager subscribe

Then you can install your missing packages/dependencies. If the problem still exists, I would suggest checking your subscription issue with Red Hat. On the other hand, as a suggestion, you could try CentOS instead of RHEL to avoid the subscription issue, as CentOS is a free Linux distribution.

Let me know if you need any help!

Thanks,
Laith
02-21-2017
09:21 AM
2 Kudos
Hello VinayRamu,

It sounds like your system is having an issue with your Red Hat Network (RHN) subscription/registration. Please check that you have an active entitlement assigned to your system/server. To confirm, would you please run the commands below and post back the output here:

yum info cyrus-sasl-gssapi && yum install cyrus-sasl-gssapi

Thanks!
Laith
02-01-2017
12:59 PM
Hello Igor,

To create a partition in Linux, you’d need to ‘fdisk’ the disk first. In your example, (sdb) is the disk, so you’d need to create the partition (sdb1):

fdisk /dev/sdb

After that, you’d need to format the new partition as ext4:

mkfs.ext4 /dev/sdb1

Make sure you mount it correctly in /etc/fstab; just like I stated in my first response, the ‘mount -a’ command is a good way to examine your fstab entries. In regards to the HDFS block size, the block division in HDFS is just logically built over the physical blocks of the ext4 filesystem. HDFS blocks are large compared to disk blocks, and the reason for this is to minimize the cost of seeks: if the block is large enough, the time it takes to transfer the data from the disk can be significantly longer than the time to seek to the start of the block.

If there are any additional questions, please let me know.

Thanks,
Laith
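The seek-versus-transfer argument can be put into rough numbers; the 10 ms seek time and 100 MB/s transfer rate below are assumed ballpark figures for a spinning disk, not measurements:

```shell
seek_ms=10            # assumed average seek time
transfer_mb_per_s=100 # assumed sequential transfer rate

for block_mb in 1 128; do
  # Time to read one block, then the share of that time spent seeking
  transfer_ms=$(( block_mb * 1000 / transfer_mb_per_s ))
  overhead_pct=$(( seek_ms * 100 / (seek_ms + transfer_ms) ))
  echo "block=${block_mb}MB transfer=${transfer_ms}ms seek_overhead=${overhead_pct}%"
done
```

With these numbers, a 1 MB block spends about half its read time seeking, while a 128 MB block spends under 1%.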
01-25-2017
11:27 AM
Hello Igor!

You can start building your cluster using any of the CDH-supported file systems: ext3, ext4, and XFS. Avoid the LVM partitioning method (which is the default partitioning method in CentOS 6 and 7); use manual disk partitioning instead. And yes, the recommended option for mounting in /etc/fstab is just like you stated:

/dev/sdb1 /data1 ext4 defaults,noatime 0 0

For more information, please take a look at this article: https://www.cloudera.com/documentation/enterprise/5-6-x/topics/install_cdh_file_system.html

Thanks,
Laith