Member since: 07-26-2016
49 Posts
2 Kudos Received
1 Solution

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 2681 | 11-28-2017 09:54 PM
11-30-2017 09:44 PM
Hive version: Hive 1.1.0-cdh5.13.0. I still didn't get the solution...
11-29-2017 12:56 AM
Hi Tristanzajonc, thanks for the explanation. I am using CDSW for testing only, so I chose the domain service with the xip.io extension. But while I am launching the session, it displays as below: it stays in the scheduling phase, and the terminal also does not open on the client side. I have also taken care of the firewall restrictions. Please guide me. Thanks.
11-28-2017 09:54 PM
1 Kudo
After updating the IP address at the Cloudera Manager level, you need to do the following:

# cdsw reset

Next, initialize CDSW:

# cdsw init

The IP address will be updated automatically. Thanks.
11-28-2017 09:38 PM
Hi Team, I have configured CDSW 1.2 on the Cloudera NameNode server. Unfortunately, the IP addresses changed, due to a dynamic IP problem. I have updated the IP addresses for Cloudera Manager as per this link: http://bigdatathinker.blogspot.in/2013/12/cloudera-manager-update-hostname-or-ip.html But now I want to update the IP address for CDSW. Please guide me. Thanks.
11-21-2017 12:53 AM
To list corrupt file blocks:

sudo -u hdfs hdfs fsck / -list-corruptfileblocks

To delete the corrupted files:

sudo -u hdfs hdfs fsck / -delete

Thanks.
09-11-2017 10:06 PM
Please check the hive-site.xml file and guide me.

<configuration>
  <property><name>hive.metastore.uris</name><value>thrift://quickstart.cloudera:9083</value></property>
  <property><name>hive.metastore.client.socket.timeout</name><value>300</value></property>
  <property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property>
  <property><name>hive.warehouse.subdir.inherit.perms</name><value>true</value></property>
  <property><name>hive.auto.convert.join</name><value>true</value></property>
  <property><name>hive.auto.convert.join.noconditionaltask.size</name><value>20971520</value></property>
  <property><name>hive.optimize.bucketmapjoin.sortedmerge</name><value>false</value></property>
  <property><name>hive.smbjoin.cache.rows</name><value>10000</value></property>
  <property><name>hive.server2.logging.operation.enabled</name><value>true</value></property>
  <property><name>hive.server2.logging.operation.log.location</name><value>/var/log/hive/operation_logs</value></property>
  <property><name>mapred.reduce.tasks</name><value>-1</value></property>
  <property><name>hive.exec.reducers.bytes.per.reducer</name><value>67108864</value></property>
  <property><name>hive.exec.copyfile.maxsize</name><value>33554432</value></property>
  <property><name>hive.exec.reducers.max</name><value>1099</value></property>
  <property><name>hive.vectorized.groupby.checkinterval</name><value>4096</value></property>
  <property><name>hive.vectorized.groupby.flush.percent</name><value>0.1</value></property>
  <property><name>hive.compute.query.using.stats</name><value>false</value></property>
  <property><name>hive.vectorized.execution.enabled</name><value>true</value></property>
  <property><name>hive.vectorized.execution.reduce.enabled</name><value>false</value></property>
  <property><name>hive.merge.mapfiles</name><value>true</value></property>
  <property><name>hive.merge.mapredfiles</name><value>false</value></property>
  <property><name>hive.cbo.enable</name><value>false</value></property>
  <property><name>hive.fetch.task.conversion</name><value>minimal</value></property>
  <property><name>hive.fetch.task.conversion.threshold</name><value>268435456</value></property>
  <property><name>hive.limit.pushdown.memory.usage</name><value>0.1</value></property>
  <property><name>hive.merge.sparkfiles</name><value>true</value></property>
  <property><name>hive.merge.smallfiles.avgsize</name><value>16777216</value></property>
  <property><name>hive.merge.size.per.task</name><value>268435456</value></property>
  <property><name>hive.optimize.reducededuplication</name><value>true</value></property>
  <property><name>hive.optimize.reducededuplication.min.reducer</name><value>4</value></property>
  <property><name>hive.map.aggr</name><value>true</value></property>
  <property><name>hive.map.aggr.hash.percentmemory</name><value>0.5</value></property>
  <property><name>hive.optimize.sort.dynamic.partition</name><value>false</value></property>
  <property><name>hive.execution.engine</name><value>mr</value></property>
  <property><name>spark.executor.memory</name><value>52428800</value></property>
  <property><name>spark.driver.memory</name><value>52428800</value></property>
  <property><name>spark.executor.cores</name><value>1</value></property>
  <property><name>spark.yarn.driver.memoryOverhead</name><value>64</value></property>
  <property><name>spark.yarn.executor.memoryOverhead</name><value>64</value></property>
  <property><name>spark.dynamicAllocation.enabled</name><value>true</value></property>
  <property><name>spark.dynamicAllocation.initialExecutors</name><value>1</value></property>
  <property><name>spark.dynamicAllocation.minExecutors</name><value>1</value></property>
  <property><name>spark.dynamicAllocation.maxExecutors</name><value>2147483647</value></property>
  <property><name>hive.metastore.execute.setugi</name><value>true</value></property>
  <property><name>hive.support.concurrency</name><value>true</value></property>
  <property><name>hive.zookeeper.quorum</name><value>quickstart.cloudera</value></property>
  <property><name>hive.zookeeper.client.port</name><value>2181</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>quickstart.cloudera</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
  <property><name>hive.zookeeper.namespace</name><value>hive_zookeeper_namespace_hive</value></property>
  <property><name>hive.cluster.delegation.token.store.class</name><value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value></property>
  <property><name>hive.server2.enable.doAs</name><value>true</value></property>
  <property><name>hive.server2.use.SSL</name><value>false</value></property>
  <property><name>spark.shuffle.service.enabled</name><value>true</value></property>
</configuration>

Thanks, Syam.
09-11-2017 09:47 PM
Kudu is like HBase.
09-11-2017 05:11 AM
Thanks for the reply. I created the table in ORC format only; you can see the details in the first post. Is Apache Kudu like Hive? Thanks, Syam.
09-11-2017 04:36 AM
Hi Ujjwal, thanks for the reply. This is the QuickStart VM machine 5.8. Uninstalling is not the solution, I think. Thanks, Syam
07-27-2017 06:32 AM
Hi friends, I have a table in Hive (110 GB in size). I want to fetch one particular record, based on a unique ID, from among the 110 GB of data, to see the comparison between Hive and Impala. My system config: RAM - 50 GB, HDD - 1 TB, CDH 5.8, QuickStart VM. I ran a select query in Impala in Hue, but it fetched up to 6.7 GB of data and then threw an error: timed out (code THRIFTSOCKET): None. If I run the same query on the command line, it works, so the problem is in Hue. How can I fix it? Please guide. Thanks, Syam.
Tags:
- hue
- impala
- QuickStart
07-24-2017 10:10 PM
Thanks for the reply, Saranvisa. I have already increased this configuration for some other reason, but the update and delete operations still don't work. Thanks, Syam.
07-13-2017 11:52 AM
Hi Harsha, I am also facing the same error: not able to delete or update in a Hive table.

create table testTableNew(id int, name string) clustered by (id) into 2 buckets stored as orc TBLPROPERTIES('transactional'='true');
insert into table testTableNew values('101','syri');

select * from testtablenew;
1 102 syam
2 101 syri
3 101 syri

delete from testTableNew where id = '101';
Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

update testTableNew set name = 'praveen' where id = 101;
Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

I have also added a few properties in hive-site.xml:

hive.support.concurrency = true
hive.enforce.bucketing = true
hive.exec.dynamic.partition.mode = nonstrict
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 2
hive.in.test = true

After restarting the Hive service, I am still facing the same error. QuickStart VM - 5.8 and Hive version - 1.1.0. Please guide me to sort out this issue. Thanks, Syam.
07-13-2017 11:14 AM
Hi all, I am not able to delete or update in a Hive table.

create table testTableNew(id int, name string) clustered by (id) into 2 buckets stored as orc TBLPROPERTIES('transactional'='true');
insert into table testTableNew values('101','syri');

select * from testtablenew;
1 102 syam
2 101 syri
3 101 syri

delete from testTableNew where id = '101';
Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

update testTableNew set name = 'praveen' where id = 101;
Error while compiling statement: FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

I have also added a few properties in hive-site.xml:

hive.support.concurrency = true
hive.enforce.bucketing = true
hive.exec.dynamic.partition.mode = nonstrict
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 2
hive.in.test = true

After restarting the Hive service, I am still facing the same error. QuickStart VM - 5.8 and Hive version - 1.1.0. Please guide me to sort out this issue. Thanks, Syam.
Labels:
- Hadoop Concepts
- Hive
- HiveOnSpark
- Hue
- Impala
- Quickstart VM
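For reference, a minimal HiveQL sketch of the setup that ACID DELETE/UPDATE normally requires. This is an illustration, not a confirmed fix: CDH 5's Hive 1.1.0 did not officially support ACID transactions, so on the QuickStart VM these settings alone may still leave Error 10294 in place.

-- Settings ACID DML usually needs (set per session or in hive-site.xml;
-- assumes a Hive build that actually ships ACID support).
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=2;

-- ACID tables must be bucketed, stored as ORC, and flagged transactional.
CREATE TABLE testTableNew (id INT, name STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

INSERT INTO TABLE testTableNew VALUES (101, 'syri'), (102, 'syam');

-- With DbTxnManager active, these should compile instead of raising Error 10294.
DELETE FROM testTableNew WHERE id = 101;
UPDATE testTableNew SET name = 'praveen' WHERE id = 102;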
05-30-2017 07:45 AM
Hi Team, I have a QuickStart VMware Cloudera 5.8 system. On it I have installed R and RStudio, and I want to integrate R with Impala. For that I did the following steps:

$ sudo yum install unixODBC
$ sudo yum install unixODBC-devel
$ yum --nogpgcheck localinstall ClouderaImpalaODBC-2.5.5.1005-1.el6.x86_64.rpm

And I copied the odbc.ini and cloudera.impalaodbc.ini files to /etc/.

odbc.ini:
HOST=quickstart.cloudera
PORT=21050
Database=default

cloudera.impalaodbc.ini:
# SimbaDN / unixODBC
ODBCInstLib=libodbcinst.so

In addition, I defined the environment variables as follows:

$ export LD_LIBRARY_PATH=/usr/local/lib:/opt/cloudera/impalaodbc/lib/64
$ export ODBCINI=/etc/odbc.ini
$ export SIMBADN=/etc/cloudera.impalaodbc.ini

After that I opened the R console and installed the packages:

$ R
> install.packages("RODBC")

I am facing the below error while connecting to Impala. Please guide me to sort out this issue. Thanks, Syam.
05-30-2017 06:51 AM
Oh! We will see what update Cloudera will give. Thanks for the reply.
05-30-2017 05:52 AM
Hi Vina, thanks for the reply. R and RStudio installed successfully. Now I want to add an RStudio service in Cloudera Manager so that I can stop, start, restart, and monitor it, see how much load it is taking, and so on. Thanks, Syam.
05-30-2017 04:08 AM
Hi friends, I am using the Cloudera QuickStart 5.8 VMware image. On the same machine we have installed R and RStudio, and all is working fine. Now my question is: how do I add an RStudio service to the Cloudera Manager screen? Please guide me. Thanks, Syam.
05-28-2017 11:52 PM
Okay, I will set the HDFS location while creating the table. The data vanishes from the HDFS location, but the data is there in the Hive location. Thanks, Syam.
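For reference, a minimal sketch of that approach. An external table whose LOCATION points at the existing HDFS directory queries the file in place, whereas LOAD DATA INPATH moves the file into the table's directory, which is why it disappears from the original path. The directory below is the one from the original post.

-- External table that reads the CSV in place; nothing is moved or copied.
-- LOCATION is the HDFS directory that already holds QSM_MarToApr2016.csv.
CREATE EXTERNAL TABLE abc (
  ID INT,
  Price DOUBLE,
  Start_DTTM STRING,
  DEL_DT_TM STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/cloudera/CPC/QSM/';

-- No LOAD DATA step is needed; the source file stays at its HDFS path.
SELECT COUNT(*) FROM abc;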
05-28-2017 11:39 PM
Hi Team, I have a QuickStart Cloudera 5.8 VM machine with plenty of RAM and hard disk. I want to install R and RStudio on QuickStart Cloudera 5.8. My questions are: is this recommended or not? If it is recommended, please guide me with the steps; if not, guide me on how to access Hive data from RStudio. Thanks.
Tags:
- rstudio

05-28-2017 11:33 PM
Hi Mbigelow, thanks for the reply. I have uploaded the flat file to an HDFS location (/user/cloudera/QSM/), and I created a table as above and loaded the data. The data loaded successfully into Hive. But I don't want to move the data to the Hive warehouse: the Hive results should come without the data vanishing from HDFS. Please guide me. Thanks, Syam.
05-24-2017 08:25 AM
Hi all, I am using QuickStart VM 5.8. I have loaded some flat files into HDFS, and I have created an external table in Hive as below:

CREATE EXTERNAL TABLE abc (ID int, Price double, Start_DTTM string, DEL_DT_TM string) row format delimited fields terminated by ',' stored as textfile;

load data inpath '/user/cloudera/CPC/QSM/QSM_MarToApr2016.csv' into table abc;

The data loaded successfully into the Hive table, but the data is vanishing from HDFS. Please suggest. Thanks, Syam.
Labels:
- Hadoop Concepts
- HDFS
- Hive
- Impala
- Quickstart VM
05-24-2017 08:20 AM
Hi friends, I have a query about splitting a date-and-time field in a Hive table. My data is in a flat file in the below format:

ID | Price | Start_DTTM | DEL_DT_TM
---|---|---|---
101 | 200 | 01MAR2016:11:31:52 | 01MAR2016:11:59:59
102 | 300 | 04MAR2016:06:13:08 | 04MAR2016:09:19:07
104 | 500 | 03MAR2016:11:54:56 | 03MAR2016:15:56:34
105 | 800 | 03MAR2016:09:10:37 | 03MAR2016:09:52:03

I have created a table as below:

CREATE TABLE abc (ID int, Price double, Start_DTTM string, DEL_DT_TM string) row format delimited fields terminated by ',' stored as textfile;

And I loaded data into the table. Now I want to split the date and the time separately into different fields. Please guide me. Thanks, Syam.
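For reference, a minimal HiveQL sketch against the abc table above, assuming the timestamps always use the fixed-width ddMMMyyyy:HH:mm:ss layout shown in the sample rows.

-- The date part is always the first 9 characters; the time part starts at character 11.
SELECT
  ID,
  Price,
  substr(Start_DTTM, 1, 9) AS start_date,  -- e.g. '01MAR2016'
  substr(Start_DTTM, 11)   AS start_time,  -- e.g. '11:31:52'
  substr(DEL_DT_TM, 1, 9)  AS del_date,
  substr(DEL_DT_TM, 11)    AS del_time
FROM abc;

-- To get a real timestamp instead of strings, unix_timestamp(Start_DTTM, 'ddMMMyyyy:HH:mm:ss')
-- should parse the value (the upper-case month, MAR vs Mar, may need normalizing first).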
05-09-2017 01:51 AM
Hi Team, I have a QuickStart VM CDH 5.8 system, where I need to install Cloudera Data Science Workbench 1.0. I am following this URL: https://www.cloudera.com/documentation/data-science-workbench/latest/topics/cdsw_install.html While installing the RPM using the command:

sudo yum install cloudera-data-science-workbench

I am facing the below error. Please guide me. Thanks, SS.
Labels:
- Cloudera Data Science Workbench
07-28-2016 05:29 AM
Dear Romainr, Please share the HUE 4 User guide. Thanks, Syam.
07-27-2016 06:38 AM
Dear Gurus, Please share the Cloudera HUE 3 User guide. Thanks, Syam.