Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2143 | 11-01-2016 05:43 PM
 | 6537 | 11-01-2016 05:36 PM
 | 4167 | 07-01-2016 03:20 PM
 | 7118 | 05-25-2016 11:36 AM
 | 3456 | 05-24-2016 05:27 PM
03-14-2016
09:56 AM
@banuradha ganapathiappan Log in as the admin user and try the same operation. Then run tail -f /var/log/ambari-server/ambari-server.log — this can give you more information.
03-14-2016
09:45 AM
1 Kudo
@Sunile Manjee Are you able to run a select count query from Beeline?
03-14-2016
09:40 AM
3 Kudos
@Viraj Vekaria See this thread https://community.hortonworks.com/questions/402/how-to-setup-high-availability-for-ambari-server.html
03-13-2016
01:58 PM
1 Kudo
@Tobias Müller Use this link for the HDP 2.3.2 sandbox. The description says HDP 2.3.0, but the download file is 2.3.2: http://hortonworks.com/products/releases/hdp-2-3/#install
03-12-2016
11:24 AM
1 Kudo
@Rushikesh Deshmukh See this: https://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_incremental_imports

Sqoop provides an incremental import mode which can be used to retrieve only rows newer than some previously-imported set of rows. The following arguments control incremental imports:

Argument | Description
---|---
--check-column (col) | Specifies the column to be examined when determining which rows to import.
--incremental (mode) | Specifies how Sqoop determines which rows are new. Legal values for mode include append and lastmodified.
--last-value (value) | Specifies the maximum value of the check column from the previous import.

Sqoop supports two types of incremental imports: append and lastmodified. You can use the --incremental argument to specify the type of incremental import to perform.

You should specify append mode when importing a table where new rows are continually being added with increasing row id values. You specify the column containing the row's id with --check-column. Sqoop imports rows where the check column has a value greater than the one specified with --last-value.

An alternate table update strategy supported by Sqoop is called lastmodified mode. You should use this when rows of the source table may be updated, and each such update will set the value of a last-modified column to the current timestamp. Rows where the check column holds a timestamp more recent than the timestamp specified with --last-value are imported.

At the end of an incremental import, the value which should be specified as --last-value for a subsequent import is printed to the screen. When running a subsequent import, you should specify --last-value in this way to ensure you import only the new or updated data. This is handled automatically by creating an incremental import as a saved job, which is the preferred mechanism for performing a recurring incremental import. See the section on saved jobs in that guide for more information.
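To make the append-mode behavior concrete, here is a minimal Python sketch of the semantics (this is an illustration only, not how Sqoop itself is implemented): keep rows whose check column exceeds the last imported value, then record the new high-water mark to pass as --last-value next time.

```python
# Sketch of Sqoop's incremental-append semantics (illustrative only):
# import rows where check_column > last_value, then track the new
# high-water mark for the next run.
def incremental_append(rows, check_column, last_value):
    new_rows = [r for r in rows if r[check_column] > last_value]
    next_last_value = max(
        (r[check_column] for r in new_rows), default=last_value
    )
    return new_rows, next_last_value

rows = [{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}]
imported, high_water = incremental_append(rows, "id", 2)
# imported contains the rows with id 3 and 4; high_water is 4,
# the value you would pass as --last-value on the next import
```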
03-12-2016
11:14 AM
@Michael Dennis Uanang You have to see this: https://kafka.apache.org/082/ops.html

ISR shrink rate: kafka.server:type=ReplicaManager,name=IsrShrinksPerSec. If a broker goes down, the ISR for some of the partitions will shrink. When that broker is up again, the ISR will be expanded once the replicas are fully caught up. Other than that, the expected value for both the ISR shrink rate and the ISR expansion rate is 0.
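The shrink/expand behavior can be sketched in a few lines of Python (a hypothetical illustration of the bookkeeping; Kafka's real logic lives in the broker's ReplicaManager): a partition's ISR loses a replica when it falls out of sync and regains it once the replica catches up.

```python
# Illustrative sketch (not Kafka code) of in-sync replica bookkeeping:
# a lagging replica shrinks the ISR, a caught-up replica expands it.
class PartitionISR:
    def __init__(self, replicas):
        self.isr = set(replicas)   # replica ids currently in sync
        self.shrinks = 0           # events like IsrShrinksPerSec counts
        self.expands = 0

    def replica_lagging(self, replica_id):
        if replica_id in self.isr:
            self.isr.discard(replica_id)
            self.shrinks += 1

    def replica_caught_up(self, replica_id):
        if replica_id not in self.isr:
            self.isr.add(replica_id)
            self.expands += 1

p = PartitionISR([1, 2, 3])
p.replica_lagging(3)     # broker 3 goes down -> ISR shrinks
p.replica_caught_up(3)   # broker 3 recovers -> ISR expands again
```

In steady state neither method fires, which matches the expected value of 0 for both rates.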
03-12-2016
10:29 AM
2 Kudos
@Rushikesh Deshmukh Look at this explanation: https://pig.apache.org/docs/r0.7.0/piglatin_ref2.html#Flatten+Operator The FLATTEN operator looks like a UDF syntactically, but it is actually an operator that changes the structure of tuples and bags in a way that a UDF cannot. FLATTEN un-nests tuples as well as bags. The idea is the same, but the operation and result are different for each type of structure.
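The bag case can be illustrated with a small Python sketch (Pig itself operates on its own tuple/bag types, not Python lists; this just mimics the semantics): flattening a bag field produces one output tuple per bag element, paired with the record's remaining fields.

```python
# Illustrative sketch of Pig's FLATTEN on a bag field (not Pig code):
# each element of the bag yields one output record, combined with the
# other fields of the input record.
def flatten_bag(record, bag_index):
    out = []
    for element in record[bag_index]:
        new_rec = (
            record[:bag_index] + tuple(element) + record[bag_index + 1:]
        )
        out.append(new_rec)
    return out

# ('a', {(1), (2)}) flattened on its bag yields ('a', 1) and ('a', 2)
recs = flatten_bag(("a", [(1,), (2,)]), 1)
```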
03-12-2016
12:49 AM
@Prakash Punj
What's the purpose of a Region Server? In HBase the slaves are called Region Servers. Each Region Server is responsible for serving a set of regions, and one region (i.e., a range of rows) can be served by only one Region Server.
Where should it be located? Every datanode? Yes.
What's the purpose of HBase Master? HBase Master coordinates the HBase cluster and is responsible for administrative operations.
HBase is a NoSQL database. What does it store its data on? HDFS.
See this: http://www.slideshare.net/xefyr/h-base-for-architectspptx
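The "one region per Region Server" point can be sketched in Python (a hypothetical illustration; HBase's real routing goes through the meta table): regions are sorted, non-overlapping row-key ranges, and any given row key falls into exactly one of them.

```python
# Illustrative sketch (not HBase code) of routing a row key to the single
# region that serves it. Each region covers [start_key, next start_key).
import bisect

# (start_key, region_server) pairs sorted by start key; "rs1" etc. are
# made-up server names for the example.
REGIONS = [("", "rs1"), ("g", "rs2"), ("p", "rs3")]

def find_region_server(row_key):
    starts = [start for start, _ in REGIONS]
    idx = bisect.bisect_right(starts, row_key) - 1
    return REGIONS[idx][1]

# "apple" routes to rs1, "hadoop" to rs2, "zookeeper" to rs3
```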
03-07-2016
12:00 PM
@Pradeep kumar See this thread: https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html Read the comments under the best answer.