Member since: 02-11-2019 · Posts: 81 · Kudos Received: 3 · Solutions: 0
12-16-2019
02:10 PM
Hi Team,
Trying to follow the tutorial for Cloudera Search deployment on CDH 6.2.
Everything works until this command:
solrctl collection --create test_collection -s 2 -c test_collection_config
We get this exception:
{
  "responseHeader":{ "status":0, "QTime":33610 },
  "failure":{
    "gis18.example.com:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://gis18.example.com:8983/solr: Error CREATEing SolrCore 'test_collection_shard1_replica_n1': Unable to create core [test_collection_shard1_replica_n1]
Caused by: Permission denied: user=solr, access=WRITE, inode=\"/\":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
"
  }
}
Please help
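The root cause in the stack trace is that the solr user cannot write to the HDFS root directory (inode "/" is owned by root:supergroup). A minimal sketch of the usual fix, assuming the Solr service stores its data under /solr in HDFS and the cluster is not Kerberized (check the Solr service's HDFS data directory setting in Cloudera Manager before using these paths):

```shell
# Create the Solr data directory in HDFS as the HDFS superuser and
# hand ownership to the solr user, then retry the collection create.
sudo -u hdfs hdfs dfs -mkdir -p /solr
sudo -u hdfs hdfs dfs -chown solr:solr /solr
solrctl collection --create test_collection -s 2 -c test_collection_config
```

In a CM-managed cluster the same directory can usually be created via the Solr service's HDFS initialization action instead of manual commands.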
Labels:
- Apache Hadoop
- Cloudera Search
12-16-2019
12:23 PM
Hi,
I am new to Cloudera Search.
How do I get the solr.xml from ZooKeeper using the solrctl tool, or any other tool?
Basically, I want to see the entire configuration for the Solr service on the cluster.
We migrated the cluster from 5.14 to 6.2.
Cluster Info:
CDH 6.2
Lucene 7.4.0
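For reference, solrctl can pull solr.xml straight out of ZooKeeper. A sketch, assuming default znode paths (the ZooKeeper host and the /solr chroot are placeholders for your cluster):

```shell
# Fetch the cluster-wide solr.xml from ZooKeeper into a local file
solrctl cluster --get-solrxml /tmp/solr.xml

# Alternatively, inspect the znode directly with the ZooKeeper CLI
zookeeper-client -server zk01.example.com:2181 get /solr/solr.xml
```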
Labels:
- Apache Solr
- Cloudera Search
10-05-2019
12:07 PM
So basically we have a situation where Hive is updating a table while Impala clients are querying the same table. Sometimes the Impala queries throw a missing HDFS files exception. How do we handle this?
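Impala caches table metadata and HDFS file lists, so when Hive rewrites a table's files, Impala can chase file handles that no longer exist. The usual remedy (a sketch; the table name is a placeholder) is to refresh Impala's view of the table after each Hive write:

```shell
# Reload the table's file metadata in Impala after the Hive job finishes
impala-shell -q "REFRESH db_name.table_name;"

# If the table was dropped and recreated or its schema changed, use the
# heavier INVALIDATE METADATA instead
impala-shell -q "INVALIDATE METADATA db_name.table_name;"
```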
10-04-2019
09:42 PM
We have a Beeline process that inserts data into a Hive table.
We can only access the table via Impala views,
so when the Beeline job is done,
we want to refresh the tables and select rows from the table via the Impala views.
Could we write a user-defined function that:
1. refreshes the table in Impala
2. selects * from the refreshed table
and call that UDF from an Impala view?
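Impala UDFs run inside a query and cannot execute statements such as REFRESH, so the refresh-then-select step is normally scripted around the jobs instead. A sketch, assuming hypothetical table and view names and a JDBC URL in $JDBC_URL:

```shell
# 1. Run the Beeline insert job
beeline -u "$JDBC_URL" -f insert_job.hql

# 2. Refresh the base table in Impala, then query through the view
impala-shell -q "REFRESH db_name.base_table;"
impala-shell -q "SELECT * FROM db_name.impala_view LIMIT 10;"
```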
Labels:
- Apache Hive
- Apache Impala
09-16-2019
02:07 PM
Is there a way to store intermediate query results in Beeline/Hive, something like the pseudocode below, within a Beeline session?

beeline -u <jdbc-url> -e "
set var1 = select max(date) from tableA;
set var2 = select count(emps) from tableB;
set var3 = select min(date) from tableC;
insert into table TableD select var1, var2, var3;
" --...
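Hive variables set via `set` or `--hivevar` hold literal text, not query results, so the assignment has to happen in the shell around Beeline. A sketch, assuming a JDBC URL in $JDBC_URL and the tables from the pseudocode:

```shell
# Capture each scalar result in the shell, then pass it back as a hivevar
VAR1=$(beeline -u "$JDBC_URL" --silent=true --showHeader=false --outputformat=tsv2 \
  -e 'select max(date) from tableA;')
VAR2=$(beeline -u "$JDBC_URL" --silent=true --showHeader=false --outputformat=tsv2 \
  -e 'select count(emps) from tableB;')

# Substitute the captured values into the final insert
beeline -u "$JDBC_URL" --hivevar var1="$VAR1" --hivevar var2="$VAR2" \
  -e 'insert into table TableD select ${hivevar:var1}, ${hivevar:var2};'
```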
Labels:
- Apache Hive
08-27-2019
10:24 AM
Using SQuirreL, I was finally able to connect only with these additional JDBC URL settings: jdbc:mysql://localhost/db?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC How can I set those properties in CM for MySQL?
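Which CM field takes the URL depends on the service, but a common gotcha is that when the URL lands in an XML-based configuration snippet (safety valve), the `&` separators must be escaped as `&amp;`. A sketch of the escaping:

```shell
# JDBC URL with the extra connection properties from SQuirreL
URL='jdbc:mysql://localhost/db?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC'

# Escape '&' for use inside an XML configuration value
XML_URL=$(printf '%s' "$URL" | sed 's/&/\&amp;/g')
echo "$XML_URL"
```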
08-26-2019
08:06 PM
I am able to connect from the localhost as sentry, root, etc. without any issues. Does the password provided in the Sentry configuration in CM need to be encrypted or something? How can I confirm that the JDBC URL is correct? We just upgraded CM/CDH from 5.12 to 6.2.
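The password is entered as plain text in the CM field, so no manual encryption should be needed. One way to confirm the URL and credentials independently of CM (a sketch; host, port, user, and database are taken from the error below):

```shell
# Connect with the same host, port, user, and database that the JDBC
# URL uses; -p prompts for the password, 'sentry' is the database name.
# A successful SELECT 1 rules out credential problems.
mysql -h localhost -P 3306 -u sentry -p sentry -e 'SELECT 1;'
```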
08-26-2019
07:50 AM
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:211)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:300)
... 19 more
+ NUM_TABLES='[ main] SqlRunner ERROR Error connecting to db with user '\''sentry'\'' and jdbcUrl '\''jdbc:mysql://localhost:3306/sentry?useUnicode=true&characterEncoding=UTF-8'\'''
+ [[ 1 -ne 0 ]]
+ echo 'Failed to count existing tables.'
+ exit 1
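"Connection refused" means nothing is accepting connections on localhost:3306 at all, before authentication is even attempted. A quick triage sketch (the service name may be mysql or mysqld depending on the distro):

```shell
# Is the MySQL server running?
systemctl status mysqld

# Is anything listening on port 3306?
ss -tlnp | grep 3306
```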
08-06-2019
03:11 PM
Hi, I am getting the above error from Hive while trying to query a table. This happens, coincidentally, during an insert operation on the table. Does it mean I can't access the table while a Hive insert operation is ongoing? The table contains lots of rows, partitioned by two columns.
Labels:
- Apache Hive
- Apache Impala
07-06-2019
09:57 AM
Hi, I have a table with a lot of data. I want to create a new table based on some column values from it. Which method is most efficient and friendliest to cluster resources?

Pseudocode:
1. Single job:
   insert into myNewTable select * from myOldTable where a=xxx etc.
2. Two jobs:
   Job 1: create a dataframe from the select statement: select * from myOldTable where a=xxx etc.
   Job 2: write the dataframe as the new table: insert into myNewTable select from dataframe
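Option 1 is generally the lighter path: a single job reads the source once and writes the result directly, while the two-job route adds scheduling and serialization overhead without gaining anything unless the intermediate result is reused. A sketch of option 1 using CTAS so the target table need not exist beforehand (the JDBC URL and filter value are placeholders from the pseudocode):

```shell
beeline -u "$JDBC_URL" -e "
  CREATE TABLE myNewTable AS
  SELECT * FROM myOldTable WHERE a = 'xxx';
"
```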
Labels:
- Apache Hive
- Apache YARN
- Cloudera Search