Member since: 06-03-2019
Posts: 59
Kudos Received: 21
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1156 | 04-11-2023 07:41 AM |
| | 7849 | 02-14-2017 10:34 PM |
| | 1382 | 02-14-2017 05:31 AM |
03-27-2017
02:03 AM
Hi, my Phoenix table ("t1", column family "f1") has the following columns:

pk - primary key
first_name varchar
last_name varchar

I want to create a view ("v1") over the above table that looks like this:

pk - primary key
fn varchar
ln varchar

I would like the column names to be shorter in the view but unchanged in the original table, so that "select * from v1" returns the data of table "t1". Is there a way to achieve this in Phoenix?
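To make the goal concrete, here is a minimal sketch, assuming the base table was created roughly as below (the DDL is inferred from the description above; the view is written with standard SQL column aliasing, which Phoenix's more restricted CREATE VIEW, generally built on a SELECT * of the base table, may not accept as-is):

-- Base table as described (inferred DDL, column family f1)
CREATE TABLE t1 (
    pk VARCHAR PRIMARY KEY,
    f1.first_name VARCHAR,
    f1.last_name VARCHAR
);

-- Desired outcome expressed as a standard-SQL aliasing view;
-- whether Phoenix accepts this exact form needs verification
CREATE VIEW v1 AS
SELECT pk, first_name AS fn, last_name AS ln FROM t1;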
Labels:
- Apache HBase
- Apache Phoenix
03-17-2017
06:03 AM
2 Kudos
I have created an ER diagram for the complete list of Ambari database tables, generated from Ambari version 2.4.2.0. It might be helpful when troubleshooting Ambari configuration issues. ambari-database-er-diagram.pdf
03-14-2017
02:46 AM
1 Kudo
jline-sqlline-mysql-connector.tar.gz

Step 1: Download the attached tar file and untar it:

tar -xvf jline-sqlline-mysql-connector.tar.gz

Step 2: Launch SqlLine with the extracted directory on the extension classpath.

Syntax: [root]# java -Djava.ext.dirs=/path/to/jline_sqlline_mysql_connector/ sqlline.SqlLine

Example:

[root@mi3 sqlline]# java -Djava.ext.dirs=/root/sqlline/jline-1.0/ sqlline.SqlLine
sqlline version 1.0.2 by Marc Prud'hommeaux

Step 3: Connect to the MySQL database.

Syntax: sqlline> !connect jdbc:mysql://<mysql hostname>:<port>/<db> <username> <password>

Example:

sqlline> !connect jdbc:mysql://hostabc:3306/hue hue hue
Connecting to jdbc:mysql://mi1.openstacklocal:3306/hue
Connected to: MySQL (version 5.1.73)
Driver: MySQL-AB JDBC Driver (version mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ))
Autocommit status: true
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:mysql://mi1.openstacklocal:3306/hue>
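Once connected, a couple of quick sanity checks (the table name in the query is just an assumption for a Hue database; !tables and !quit are standard sqlline commands):

sqlline> !tables
sqlline> SELECT COUNT(*) FROM auth_user;
sqlline> !quit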
02-24-2017
08:22 AM
@Sachin Ambardekar Yes, you can generate an explain plan for your Hive query by running:

Syntax: EXPLAIN [EXTENDED|DEPENDENCY|AUTHORIZATION] query

You can read more about Hive query tuning here: https://www.slideshare.net/HadoopSummit/how-to-understand-and-analyze-apache-hive-query-execution-plan-for-performance-debugging
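For example, against a hypothetical table (the table and column names are assumptions for illustration):

hive> EXPLAIN SELECT dept, COUNT(*) FROM employees GROUP BY dept;
hive> EXPLAIN EXTENDED SELECT dept, COUNT(*) FROM employees GROUP BY dept;

Plain EXPLAIN prints the stage dependency graph and the operator tree for each stage; EXTENDED adds extra detail such as input file paths.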
02-22-2017
04:21 AM
3 Kudos
I have tried copying data from a non-kerberized cluster to a kerberized cluster using snapshot diff, but it fails with the following error in HDP 2.5.3. Currently distcp supports only the hdfs:// RPC protocol for snapshot-diff-based copy; if you use webhdfs on either the source or the target, you will encounter this error.

Command:

$ hadoop distcp -diff s1 s2 -update webhdfs://nonsecure_cluster:50070/source hdfs://secure_cluster:8020/target

Error:

java.lang.IllegalArgumentException: The FileSystems needs to be DistributedFileSystem for using snapshot-diff-based distcp
        at org.apache.hadoop.tools.DistCpSync.preSyncCheck(DistCpSync.java:86)
        at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:124)
        at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:180)
        at org.apache.hadoop.tools.DistCp.execute(DistCp.java:155)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:128)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:462)
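For reference, a sketch of the form that should work, with hdfs:// on both sides (the hosts and snapshot names are reused from the failing command; the 8020 RPC port on the source is an assumption to verify):

$ hadoop distcp -diff s1 s2 -update hdfs://nonsecure_cluster:8020/source hdfs://secure_cluster:8020/target

When copying between a non-kerberized and a kerberized cluster over RPC, you may additionally need ipc.client.fallback-to-simple-auth-allowed=true on the secure side; treat that as an assumption to verify for your versions.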
02-17-2017
04:18 AM
@Jeyanth Kumar Kathiresan It could be a bug or a network issue between the Hive server and the NameNode host. Start with the Hive server log and the NameNode log, and see if you can find any timeouts or errors around that time. If you find no issues in the logs, work with Hortonworks support; it could be a bug.
02-17-2017
04:08 AM
@Juan Castrilli The problem seems to be due to this error:

Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Boolean

Hive is trying to cast a string data type to boolean, because the WHERE clause references the column (newcolumn12) without any comparison operator:

Query: select count(*) from test_mkt where newcolumn12

Can you try a WHERE clause condition that is valid for this column's data type, e.g.:

select count(*) from test_mkt where newcolumn12='xyz';
02-14-2017
10:47 PM
@Manish The database name is case-sensitive. Can you change it to "ambaridatabase" in your /etc/ambari-server/conf/ambari.properties file and restart the Ambari server?
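A sketch of the change (assuming the database was actually created with the lowercase name "ambaridatabase"):

# /etc/ambari-server/conf/ambari.properties
server.jdbc.database_name=ambaridatabase

Then restart:

ambari-server restart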
02-14-2017
10:34 PM
Hi, it seems a Postgres database named "AMBARIDATABASE" does not exist in your environment. Cross-check your Ambari server config file; usually the DB name is "ambari":

cat /etc/ambari-server/conf/ambari.properties | grep -i server.jdbc.database_name
server.jdbc.database_name=ambari

To verify which Postgres databases are available:

[root@conf]# su - postgres
-bash-4.1$ psql
psql (8.4.20)
Type "help" for help.

postgres=# \list
                                List of databases
   Name    |  Owner   | Encoding |  Collation  |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 ambari    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres
           : postgres=CTc/postgres
           : ambari=CTc/postgres
 hive      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
           : postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
           : postgres=CTc/postgres
(5 rows)
02-14-2017
05:31 AM
1 Kudo
I too have noticed this problem with "drivers.csv". What I observed is that files smaller than about 4 KB cannot be uploaded through the Ambari Files view, but the same files upload fine through the HDFS CLI or with cURL commands. If you still want to use the Ambari Files view, add more content to the CSV file so that it exceeds 4 KB (or 8 KB) in size. Let me know if that works.
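For reference, sketches of the CLI and cURL alternatives (the target path, user name, and NameNode host are assumptions for illustration):

# HDFS CLI
hdfs dfs -put drivers.csv /user/admin/drivers.csv

# WebHDFS via cURL: the CREATE call returns a 307 redirect to a
# datanode URL, and the second call writes the file there
curl -i -X PUT "http://<namenode-host>:50070/webhdfs/v1/user/admin/drivers.csv?op=CREATE&user.name=admin"
curl -i -X PUT -T drivers.csv "<datanode-redirect-url-from-first-response>"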