Member since: 02-15-2016
Posts: 113
Kudos Received: 7
Solutions: 2
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 5131 | 07-11-2017 08:10 AM
 | 2504 | 03-07-2016 03:03 PM
05-23-2018
11:19 AM
The database settings are correct. I even removed Oozie and added it back with a new database/username/password, and it was able to connect to MySQL, but it fails at the point of creating the tables in MySQL with the error below:

Wed May 23 14:11:22 EDT 2018
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
using 5 as CDH_VERSION
using /var/lib/oozie/tomcat-deployment as CATALINA_BASE
CONF_DIR=/run/cloudera-scm-agent/process/196-oozie-OOZIE_SERVER
CMF_CONF_DIR=/etc/cloudera-scm-agent
Copying JDBC jar from /usr/share/java/mysql-connector-java.jar to /var/lib/oozie
ERROR: Oozie could not be started
REASON: org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Could not load password for [oozie.service.JPAService.jdbc.password]
Stacktrace:
-----------------------------------------------------------------
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Could not load password for [oozie.service.JPAService.jdbc.password]
at org.apache.oozie.service.Services.loadServices(Services.java:309)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.IllegalArgumentException: Could not load password for [oozie.service.JPAService.jdbc.password]
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:598)
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:585)
at org.apache.oozie.service.JPAService.init(JPAService.java:160)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
... 26 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:591)
... 31 more
Caused by: java.io.IOException: Configuration problem with provider path.
at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2118)
at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2037)
... 36 more
Caused by: java.io.IOException: Bad configuration of hadoop.security.credential.provider.path at localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks
at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:86)
at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2098)
... 37 more
Caused by: java.net.URISyntaxException: Illegal character in path at index 18: localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parseHierarchical(URI.java:3086)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.<init>(URI.java:595)
at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:67)
... 38 more
-----------------------------------------------------------------
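The root cause at the bottom of the trace is that hadoop.security.credential.provider.path still contains the literal, unsubstituted {{CMF_CONF_DIR}} template, and the characters { and } are not legal in a URI, which is exactly why Java's URI parser fails "at index 18". A minimal sketch of that check (the helper name and the character class are mine, approximating RFC 3986; they are not part of Hadoop):

```python
import re

# Characters permitted in a URI per RFC 3986 (approximate class);
# note that '{' and '}' are absent, so an unexpanded {{VAR}} template fails.
ALLOWED = re.compile(r"^[A-Za-z0-9._~:/?#\[\]@!$&'()*+,;=%-]*$")

def is_valid_uri_chars(uri):
    """Return True only if every character in `uri` is legal in a URI."""
    return bool(ALLOWED.match(uri))

bad = "localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks"
print(is_valid_uri_chars(bad))  # False: '{' at index 18 is illegal
print(bad.index("{"))           # 18, matching the URISyntaxException
```

Since the substitution is normally done by the Cloudera Manager agent before the Oozie process starts, the check above only illustrates why the literal template cannot parse; the fix belongs on the CM/agent side, not in Oozie itself.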
05-22-2018
09:38 PM
Hi, I have created a cluster in AWS, but the services that connect to a MySQL database (Oozie, Hive Metastore, Hue) are not starting because they are not able to connect to MySQL. I am getting the error below; I am not sure what is wrong with the credential files:

ERROR: Oozie could not be started
REASON: org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Could not load password for [oozie.service.JPAService.jdbc.password]
Stacktrace:
-----------------------------------------------------------------
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Could not load password for [oozie.service.JPAService.jdbc.password]
at org.apache.oozie.service.Services.loadServices(Services.java:309)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: java.lang.IllegalArgumentException: Could not load password for [oozie.service.JPAService.jdbc.password]
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:598)
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:585)
at org.apache.oozie.service.JPAService.init(JPAService.java:160)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
... 26 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.service.ConfigurationService.getPassword(ConfigurationService.java:591)
... 31 more
Caused by: java.io.IOException: Configuration problem with provider path.
at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2118)
at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2037)
... 36 more
Caused by: java.io.IOException: Bad configuration of hadoop.security.credential.provider.path at localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks
at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:86)
at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2098)
... 37 more
Caused by: java.net.URISyntaxException: Illegal character in path at index 18: localjceks://file/{{CMF_CONF_DIR}}/creds.localjceks
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parseHierarchical(URI.java:3086)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.<init>(URI.java:595)
at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:67)
... 38 more
-----------------------------------------------------------------
Labels:
- Apache Hive
- Apache Oozie
04-16-2018
12:16 PM
Hi,
I have configured Impala behind HAProxy for load balancing, and Impala is also configured with LDAP, but I am not able to connect to Impala using LDAP.
The HAProxy endpoint is allwcc9hddn01.prod.com:21051.

beeline> !connect "jdbc:hive2://allwcc9hddn01.prod.com:21051/default;user=abcd;password=abcd"
Connecting to jdbc:hive2://allwcc9hddn01.prod.com:21051/default;user=abcd;password=abcd
Unexpected end of file when reading from HS2 server. The root cause might be too many concurrent connections. Please ask the administrator to check the number of active connections, and adjust hive.server2.thrift.max.worker.threads if applicable.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://allwcc9hddn01.prod.com:21051/default;user=abcd;password=abcd: null (state=08S01,code=0)

Kerberos works fine, however. What could be the reason?
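One possible cause, and this is only a guess, not something confirmed in the thread: Impala refuses LDAP passwords over an unencrypted transport unless that is explicitly allowed, so with LDAP enabled the client usually needs ssl=true in the JDBC URL (and HAProxy must carry TLS through to the impalad port). A tiny hypothetical helper showing the URL shape, with the host and user values copied from the post:

```python
def impala_jdbc_url(host, port, database="default", user=None, ssl=False):
    """Assemble a HiveServer2-protocol JDBC URL for Impala (sketch)."""
    url = "jdbc:hive2://{}:{}/{}".format(host, port, database)
    if user:
        url += ";user={}".format(user)
    if ssl:
        url += ";ssl=true"  # LDAP auth generally requires TLS on the wire
    return url

print(impala_jdbc_url("allwcc9hddn01.prod.com", 21051, user="abcd", ssl=True))
# jdbc:hive2://allwcc9hddn01.prod.com:21051/default;user=abcd;ssl=true
```

This would also explain why Kerberos works while LDAP does not: only the LDAP path carries a plaintext password that Impala insists on protecting.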
Labels:
- Apache Hive
- Apache Impala
- Kerberos
04-16-2018
12:11 PM
Use mapred:hadoop ownership for the logs in /tmp.
04-02-2018
08:41 AM
Hi, do we have a command to check Kudu table size? For Hive and HBase tables we can check the size on HDFS; is there something similar for Kudu?
Labels:
- Apache Kudu
04-02-2018
08:27 AM
Thanks Alex. Yeah, I used distcp for the HBase files as well and copied them to S3. I was also able to restore that backup on another cluster, but you need to run an offline meta repair. My requirement was to copy everything to S3, spin up EC2 instances, and later restore the data from the S3 backup if required. Exporting the HBase table is also a good option, but then you need enough space on /tmp, since export copies the tables locally before copying them to S3. We ran out of space with that method, so we decided to copy the files instead.
03-14-2018
09:10 AM
OK, this is because of the jar files that were deployed in the aux path. These jars were compiled against an old Hadoop version and were forcing Beeline to load its configuration from there. You can remove these jars or compile them against the new Hadoop version.
03-12-2018
09:02 PM
Did you find a solution for this? Does anyone know about this issue?
03-11-2018
10:21 PM
Looks like this is resolved in Hadoop 2.8.0 (not sure, though); check this: https://github.com/minio/minio/issues/2965. The only workaround I found is to first load the data without encryption and then enable encryption on the files copied to S3 (manually). By the way, I have a general question: this SSE protects data in S3 only, but what if someone with an AWS admin role downloads the data to a local disk? It is no longer encrypted.
03-07-2018
06:38 PM
Hi,
I want to take a cluster backup to S3 and then wipe out the cluster, and possibly spin it up again later.
What is the best approach for taking the backup to S3?
1. Copy the entire HDFS content to S3 (in a single bucket, or create multiple buckets)?
2. Do I need to take an HBase snapshot along with copying /hbase to S3, or will either one work?
3. Encryption at rest is enabled; are there any special considerations when moving the backup to S3?
4. And how will a restore from S3 work? Just a copy back to HDFS?
Labels:
- Apache HBase
- HDFS
02-06-2018
09:44 AM
Hi, is there any problem with the 5.14 mirror? Downloads are failing frequently:

https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.14.0/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 503"
Trying other mirror.
Is this ok [y/N]: y
Downloading Packages:
https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.14.0/RPMS/x86_64/cloudera-manager-daemons-5.14.0-1.cm5140.p0.25.el6.x86_64.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 503"
Trying other mirror.
Error Downloading Packages:
cloudera-manager-daemons-5.14.0-1.cm5140.p0.25.el6.x86_64: failure: RPMS/x86_64/cloudera-manager-daemons-5.14.0-1.cm5140.p0.25.el6.x86_64.rpm from cloudera-manager: [Errno 256] No more mirrors to try.
Labels:
- Cloudera Manager
- Manual Installation
01-23-2018
11:53 AM
Let me know what info you need; I can share it.
01-17-2018
01:45 PM
Guys, any thoughts/suggestions on this?
09-29-2017
08:32 AM
Let me explain with an example:

top - 10:57:50 up 58 days, 37 min, 7 users, load average: 257.91, 256.55, 253.75
Tasks: 1322 total, 1 running, 1302 sleeping, 0 stopped, 19 zombie
%Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 82.3 id, 17.5 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 26370188+total, 12234492 free, 51030360 used, 20043704+buff/cache
KiB Swap: 8388604 total, 5778116 free, 2610488 used. 21161840+avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
39762 root 39 19 1411748 35056 3712 S 6.0 0.0 95:15.54 receptor
16806 impala 20 0 46.612g 5.321g 28840 S 1.0 2.1 732:22.32 impalad
22047 root 20 0 158908 3472 1540 R 1.0 0.0 0:00.08 top
12285 hbase 20 0 26.438g 0.018t 13132 S 0.7 7.5 1775:40 java
65 root 20 0 0 0 0 S 0.3 0.0 58:05.24 rcuos/14
11525 yarn 20 0 4991204 1.977g 14540 S 0.3 0.8 35206:02 java
12416 hbase 20 0 114580 2156 692 S 0.3 0.0 218:57.29 hbase.sh
17528 complia+ 20 0 3182376 843428 27068 S 0.3 0.3 8:00.97 java
17888 complia+ 20 0 3181380 842192 26796 S 0.3 0.3 8:35.77 java
20861 complia+ 20 0 3195300 882332 26860 S 0.3 0.3 8:51.63 java
32800 hive 20 0 3448532 1.047g 25244 S 0.3 0.4 6:42.25 java

This server got hung under heavy load. Since it is one of the impalad servers, impalad is no longer working on this server, which causes the whole Impala service to hang. I am wondering why an issue on one impalad server causes the whole Impala service to hang. After restarting the hung server, Impala works fine.
09-26-2017
10:19 AM
Hi, does Impala support automatic metadata updates from Hive (DDL/DML)? Is this available in any version?
Labels:
- Apache Impala
09-22-2017
05:33 AM
Hi, I am trying to understand this situation. Let's say I have 5 impalad servers; if any one of them is down, Impala queries get a timeout error or will not execute. What is the reason behind this?
Labels:
- Apache Impala
07-22-2017
08:25 PM
Hi, I am trying to authenticate a Hadoop client on a Windows server with a Linux KDC. Can anyone provide a document or link for such an integration?

Hadoop client - Windows Server 2012
KDC - Linux
Hadoop cluster - Linux
07-11-2017
10:20 AM
At first login it should ask for a username/password to create a new account, but I am not getting that option; I am only getting the login page.
07-11-2017
08:10 AM
What port does the CDSW web URL run on?

# This domain is for DNS and is unrelated to Kerberos or LDAP domains.
DOMAIN="cdsw.company.com"
# IPv4 address for the master node that is reachable from the worker nodes.
#
# Within an AWS VPC, MASTER_IP should be set to the internal IP
# of the master node; for instance, "10.251.50.12" corresponding to
# master node name of ip-10-251-50-12.ec2.internal.
MASTER_IP="10.11.140.64"

For DOMAIN="cdsw.company.com", shall I put my company domain like cdsw.test.com, or just test.com?
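On the DOMAIN question, a hedged sketch (the values below are placeholders, not verified against this cluster): DOMAIN should be a dedicated subdomain such as cdsw.test.com rather than the bare company domain, and both it and the wildcard *.cdsw.test.com need DNS records pointing at MASTER_IP. The web UI is then served on that domain over the standard HTTP/HTTPS ports.

```
# Hypothetical cdsw.conf excerpt -- hostnames/IP are examples only.
# DOMAIN and *.DOMAIN must both resolve (wildcard DNS) to MASTER_IP.
DOMAIN="cdsw.test.com"
MASTER_IP="10.11.140.64"
```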
07-10-2017
08:51 PM
OK, after a couple of troubleshooting steps it moved on and completed:

Cloudera Data Science Workbench is not ready yet: some system pods are not ready
Master node configuration successful.
The application may take up to 10 minutes to initially start up.
To check application status use:
$ watch cdsw status

but:

Every 2.0s: cdsw status Mon Jul 10 23:49:23 2017
Cloudera Data Science Workbench Status
Service Status
docker: active
kubelet: active
nfs: active
Checking kernel parameters...
Node Status
Cloudera Data Science Workbench is not ready yet: kubectl command failed

[root@docker ~]# kubectl get pods --show-all
NAME READY STATUS RESTARTS AGE
cron-2934152315-mqsei 1/1 Running 0 20m
db-39862959-2gt3u 1/1 Running 0 20m
db-migrate-052787a-qhp19 0/1 Completed 0 20m
engine-deps-xfrsh 1/1 Running 0 20m
ingress-controller-3138093376-eacx8 1/1 Running 0 20m
livelog-1900214889-e8oys 1/1 Running 0 20m
reconciler-459456250-t057g 1/1 Running 0 20m
spark-port-forwarder-guxvb 1/1 Running 0 20m
web-3826671331-1fp2r 1/1 Running 0 20m
web-3826671331-66myb 1/1 Running 0 20m
web-3826671331-z8ark 1/1 Running 0 20m

Is it still downloading the images?
07-10-2017
06:41 PM
OK, I reset CDSW and started again, but this time it is stuck at:

<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 18.555501 seconds
<master/apiclient> waiting for at least one node to register and become ready.

I checked the proxy and it looks OK. kubectl get pods does not return anything.
07-10-2017
02:32 PM
It moved on after changing the proxy settings and is now waiting in an infinite loop:

<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=2a582a.63ec0427495ec31c 10.17.160.64
Added bootstrap token KUBE_TOKEN to /etc/cdsw/config/cdsw.conf
node "docker.test.com" tainted
daemonset "weave-net" created
Waiting for kube-system cluster to come up. This could take a few minutes...
Some pods in kube-system have not yet started. This may take a few minutes.
Waiting for 10 seconds before checking again...
(the last two lines repeat indefinitely)
07-10-2017
01:18 PM
Did you find any solution to this error? I am also getting the same.
07-10-2017
12:43 PM
No:

[root@docker ~]# docker pull gcr.io/google_containers/pause-amd64:3.0
Error response from daemon: Get https://gcr.io/v1/_ping: authenticationrequired

15:21:55.955985334-04:00" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: authenticationrequired"
15:21:55.956065010-04:00" level=error msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: authenticationrequired"
15:21:56.000584332-04:00" level=error msg="Attempting next endpoint for pull after error: Get https://gcr.io/v1/_ping: authenticationrequired"
15:21:56.000652326-04:00" level=error msg="Handler for POST /images/create returned error: Get https://gcr.io/v1/_ping: authenticationrequired"
(plus the same "No such image: gcr.io/google_containers/pause-amd64:3.0" handler error repeating every few seconds)

But if it is able to pull other images, then why is it failing for this one?

REPOSITORY TAG IMAGE ID CREATED SIZE
docker.repository.cloudera.com/cdsw/1.0.1/third-party/weaveexec 1.9.0 300f92429697 5 months ago 90.4 MB
07-10-2017
11:17 AM
14:15:54.742150498-04:00" level=warning msg="Error getting v2 registry: Get https://gcr.io/v2/: authenticationrequired"
14:15:54.742216009-04:00" level=error msg="Attempting next endpoint for pull after error: Get https://gcr.io/v2/: authenticationrequired"

What authentication is it looking for?
07-10-2017
11:14 AM
After putting in the proxy settings it moved on, and is now stuck at:

<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

[root@docker ~]# systemctl status docker
● docker.service - docker
Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2017-07-10 14:04:12 EDT; 7min ago
Docs: https://docs.docker.com
Main PID: 20705 (dockerd)
Memory: 51.1M
CGroup: /system.slice/docker.service
├─20705 dockerd --log-driver=journald -s devicemapper --storage-opt dm.basesize=100G --storage-opt dm.thinpooldev=/dev/mapper...
└─20720 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout...

journalctl -u docker shows the same line repeating:

10T14:12:06.695585728-04:00" level=error msg="Handler for GET /images/gcr.io/google_containers/pause-amd64:3.0/json returned error: No such image: gcr.io/google
(repeated every few seconds)

[root@docker ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.repository.cloudera.com/cdsw/1.0.1/third-party/weaveexec 1.9.0 300f92429697 5 months ago 90.4 MB
07-10-2017
10:46 AM
Hi, I am getting an error while initializing CDSW:

[root@docker ~]# cdsw init
Using user-specified config file: /etc/cdsw/config/cdsw.conf
Prechecking OS Version........[OK]
Prechecking scaling limits for processes........
WARNING: Cloudera Data Science Workbench recommends that all users have a max-user-processes limit of at least 65536. It is currently set to [65535] as per 'ulimit -u'
Press enter to continue
Prechecking scaling limits for open files........
WARNING: Cloudera Data Science Workbench recommends that all users have a max-open-files limit set to 1048576. It is currently set to [65535] as per 'ulimit -n'
Press enter to continue
Prechecking that iptables are not configured........[OK]
Prechecking that SELinux is disabled........[OK]
Prechecking configured block devices and mountpoints........[OK]
Prechecking kernel parameters........[OK]
Prechecking that docker block devices are of adequate size........[OK]
Prechecking that application block devices are of adequate size........[OK]
Prechecking size of root volume........
WARNING: The recommended minimum root volume size is 100G.
Press enter to continue
Prechecking that CDH gateway roles are configured........[OK]
Prechecking that /etc/krb5 file is not a placeholder........[OK]
Prechecking parcel paths........[OK]
Prechecking CDH client configurations........[OK]
Prechecking Java version........[OK]
Prechecking Java distribution........[OK]
Creating docker thinpool if it does not exist
Volume group "docker" not found
Cannot process volume group docker
Unmounting /dev/mapper/data01-data01
umount: /dev/mapper/data01-data01: not mounted
Removing Docker volume groups.
Volume group "docker" not found
Cannot process volume group docker
Volume group "docker" not found
Cannot process volume group docker
Cleaning up docker directories...
Wiping ext4 signature on /dev/mapper/data01-data01.
Physical volume "/dev/data01/data01" successfully created
Volume group "docker" successfully created
Logical volume "thinpool" created.
Logical volume "thinpoolmeta" created.
WARNING: Converting logical volume docker/thinpool and docker/thinpoolmeta to pool's data and metadata volumes. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Converted docker/thinpool to thin pool.
Logical volume "thinpool" changed.
Initialize application storage at /var/lib/cdsw
Disabling node with IP [10.11.160.64]...
Node [10.11.160.64] removed from nfs export list successfully.
Stopping rpc-statd... Stopping nfs-idmapd... Stopping rpcbind... Stopping nfs-server...
Removing entry from /etc/fstab...
Skipping format since volumes are already set correctly.
Adding entry to /etc/fstab...
Mounting [/var/lib/cdsw]...
Starting rpc-statd... Enabling rpc-statd... Starting nfs-idmapd... Enabling nfs-idmapd... Starting rpcbind... Enabling rpcbind... Starting nfs-server... Enabling nfs-server...
Enabling node with IP [10.11.160.64]...
Node [10.11.160.64] added to nfs export list successfully.
Starting rpc-statd... Enabling rpc-statd... Starting nfs-idmapd... Enabling nfs-idmapd... Starting rpcbind... Enabling rpcbind... Starting nfs-server... Enabling nfs-server...
Starting docker... Enabling docker... Starting ntpd... Enabling ntpd...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
ERROR:: Unable to reset weave networking state.: 125
07-05-2017
10:49 AM
It could be a Kerberos configuration issue on the Informatica server side. Can you share more details?
07-04-2017
09:24 AM
Are you getting any errors while connecting? What kind of security is enabled for LDAP?