Member since: 09-13-2017
Posts: 17
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2936 | 03-23-2018 07:34 AM
 | 24877 | 11-06-2017 05:38 AM
08-02-2018
06:58 AM
@David M., Actually, I'm looking to extract the Impala logs with the query text, start time, end time, memory usage, username, etc., so I can track user queries and build live dashboards like Cloudera Navigator, but free of cost. We can use Spark or a UDF to create a Hive table from the JSON:

>>> df = sqlContext.read.json("/user/venkata/lineage.json")
>>> df.saveAsTable("secure.lineage")

Thanks, Venkat
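Building on the snippet above, here is a minimal stand-alone sketch of flattening lineage records into dashboard rows (plain Python, no Spark; the field names queryText, user, timestamp, and endTime are assumptions about the lineage JSON, not verified against real log output):

```python
import json

def lineage_rows(lines):
    """Flatten one-JSON-object-per-line lineage records into dashboard rows.

    Field names are assumed, modeled on the discussion in this thread.
    """
    rows = []
    for line in lines:
        rec = json.loads(line)
        rows.append({
            "user": rec.get("user"),
            "query": rec.get("queryText"),
            "start": rec.get("timestamp"),
            "end": rec.get("endTime"),
        })
    return rows

# Hypothetical record shaped like a lineage log line.
sample = ['{"queryText": "select 1", "user": "venkata", '
          '"timestamp": 1533196680, "endTime": 1533196681}']
print(lineage_rows(sample))
```

The same loop could tail files under /var/log/impalad/lineage/ and append rows to a table for the dashboard.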
08-02-2018
05:28 AM
@saranvisa, Thanks for the reply. This isn't just a one-time need; exporting is a manual step, but I would like to stream the data continuously. In my cluster, the minimum Impala load is about 300 queries per 30 minutes, which is huge. Thanks, Venkat
08-01-2018
12:23 PM
@Venkatesh_Kumar, How did you get the Impala query report? I'm trying to download the Impala queries; could you help me? Is there a location or a way to download them? I found the logs under /var/log/impalad/lineage/, but I'm not able to stream those logs into a table. Thanks, Venkat.
07-18-2018
06:10 AM
Apply:

INVALIDATE METADATA pgw.pgw_in;
ALTER TABLE pgw.pgw_in RECOVER PARTITIONS;
COMPUTE STATS pgw.pgw_in;

(WARNING: The following tables are missing relevant table and/or column statistics.)
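As a sketch, the same refresh sequence can be generated per table and then fed to impala-shell -q; the helper below only builds the SQL strings (the table name is the one from this post):

```python
def refresh_statements(table):
    """Build the Impala metadata-refresh sequence for one table."""
    return [
        f"INVALIDATE METADATA {table};",
        f"ALTER TABLE {table} RECOVER PARTITIONS;",
        f"COMPUTE STATS {table};",
    ]

# Print the statements for the table discussed in this post.
for stmt in refresh_statements("pgw.pgw_in"):
    print(stmt)
```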
06-13-2018
01:04 PM
It's a problem with the Hive server. Please run the UPDATE or DELETE commands from a client machine, or add the settings below to the hive-site.xml file:

SET hive.support.concurrency = true;
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on = true;
SET hive.compactor.worker.threads = 1;
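A quick way to check whether hive-site.xml already carries the transaction settings is to parse it and diff against the required values. A minimal sketch (the required set below is a subset of the properties listed above; the sample XML is hypothetical):

```python
import xml.etree.ElementTree as ET

# Subset of the ACID-related settings from this post.
REQUIRED = {
    "hive.support.concurrency": "true",
    "hive.txn.manager": "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager",
    "hive.compactor.initiator.on": "true",
}

def missing_acid_settings(hive_site_xml):
    """Return the required settings that are absent or wrong in the XML string."""
    root = ET.fromstring(hive_site_xml)
    found = {}
    for prop in root.iter("property"):
        found[prop.findtext("name")] = prop.findtext("value")
    return {k: v for k, v in REQUIRED.items() if found.get(k) != v}

# Hypothetical hive-site.xml that only has one of the three settings.
sample = """<configuration>
  <property><name>hive.support.concurrency</name><value>true</value></property>
</configuration>"""
print(missing_acid_settings(sample))
```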
01-17-2018
12:54 PM
Thank you, @saranvisa and @Divyani!
01-15-2018
06:11 AM
Hi, I have a problem with storage space in Cloudera. I'm using AWS services for PostgreSQL, and AWS reports 96 GB free out of 100 GB. Could anyone help me increase the PostgreSQL database storage space for the Cloudera Management Services? #CDH 5.10.1 #PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit. Thanks, Venkat.
Labels:
- Cloudera Manager
11-07-2017
01:37 PM
1 Kudo
Hi,
I'm trying to run a geometry query with ST_Point.
add jar esri-geometry-api-1.1-sources.jar;
add jar esri-geometry-api-1.1.jar;
add jar spatial-sdk-hadoop.jar;
add jar spatial-sdk-hive.jar;
add jar spatial-sdk-json.jar;
create temporary function ST_AsText as 'com.esri.hadoop.hive.ST_AsText';
create temporary function ST_Intersects as 'com.esri.hadoop.hive.ST_Intersects';
create temporary function ST_Length as 'com.esri.hadoop.hive.ST_Length';
create temporary function ST_LineString as 'com.esri.hadoop.hive.ST_LineString';
create temporary function ST_Point as 'com.esri.hadoop.hive.ST_Point';
create temporary function ST_Polygon as 'com.esri.hadoop.hive.ST_Polygon';
create temporary function ST_SetSRID as 'com.esri.hadoop.hive.ST_SetSRID';
create temporary function st_geomfromtext as 'com.esri.hadoop.hive.ST_GeomFromText';
create temporary function st_geometrytype as 'com.esri.hadoop.hive.ST_GeometryType';
create temporary function st_asjson as 'com.esri.hadoop.hive.ST_AsJson';
create temporary function st_asbinary as 'com.esri.hadoop.hive.ST_AsBinary';
create temporary function st_x as 'com.esri.hadoop.hive.ST_X';
create temporary function st_y as 'com.esri.hadoop.hive.ST_Y';
create temporary function st_srid as 'com.esri.hadoop.hive.ST_SRID';
SELECT ST_Point(longitude, latitude) from mytable LIMIT 1;
I get the below error:
Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: Unable to instantiate UDF implementation class com.esri.hadoop.hive.ST_Point: java.lang.IllegalAccessException: Class org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge can not access a member of class com.esri.hadoop.hive.ST_Point with modifiers ""
Any ideas on this error?
Thanks.
Labels:
- Apache Hive
11-06-2017
05:38 AM
It was a problem with the KDC admin server having only a private IP. Now I'm able to connect to Hive ODBC using DBeaver. Thanks.
11-03-2017
10:21 AM
Thanks, @EricL. It was an FQDN issue, and I've changed the FQDN from _HOST to HiveServer2. Now I get the following error message: FAILED! [Microsoft][Hardy] (34) Error from server: SSL_connect: unknown protocol. We are using Centrify DC and a Windows server on the same network. Any ideas on this error? Thanks again, @EricL.
Tags:
- Hive
11-01-2017
11:33 AM
1 Kudo
Did anyone fix this problem without upgrading CDH? I'm using CDH 5.10.1.
10-30-2017
01:05 PM
Hi, We have a Kerberized cluster. I'm able to use the Impala ODBC driver on a Windows machine, authenticating with a username and password over SASL. When I try to connect with the Hive ODBC driver, authenticating with Kerberos, I get the following error message:

FAILED! [Microsoft][Hardy] (34) Error from server: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server not found in Kerberos database).

Tried:
KRB5_CONFIG = C:\Program Files\MIT\Kerberos5\krb5.ini
KRB5CCNAME = C:\temp\krb5cache
C:\Program Files\MIT\Kerberos5\venkata.keytab

C:\Program Files\MIT\Kerberos5\krb5.ini (config):

[libdefaults]
default_realm = MYKDC.YSTAT.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts aes128-cts
default_tkt_enctypes = aes256-cts aes128-cts
permitted_enctypes = aes256-cts aes128-cts
udp_preference_limit = 1
kdc_timeout = 3000
max_life = 1d
max_renewable_life = 7d
kdc_tcp_ports = 88
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true

[realms]
MYKDC.YSTAT.COM = {
kdc = dc1.MYKDC.YSTAT.COM
admin_server = dc1.MYKDC.YSTAT.COM
max_renewable_life = 7d 0h 0m 0s
default_principal_flags = +renewable
}

I tried different drivers (Simba, Microsoft, Cloudera) and created new users and new keytabs. Any ideas on this error? Thanks.
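"Server not found in Kerberos database" generally means the KDC has no entry for the service principal the client requests. As a rough sketch (pure Python, hostnames hypothetical), the principal a GSSAPI client asks for is built from the service name and the server's FQDN, so the host configured in the ODBC DSN must match the FQDN registered with the KDC:

```python
def expected_service_principal(service, host, realm):
    """Kerberos service principal a GSSAPI client requests: service/fqdn@REALM."""
    return f"{service}/{host.lower()}@{realm.upper()}"

# The FQDN must be exactly what the KDC has registered; connecting by IP
# or by a short name is a common cause of "Server not found in Kerberos
# database". The hostname below is hypothetical.
print(expected_service_principal("hive", "hs2-node.mykdc.ystat.com",
                                 "MYKDC.YSTAT.COM"))
```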
Labels:
- Apache Hive
- Apache Impala
- Kerberos
10-20-2017
06:41 AM
I'm seeing a lot of simple queries like "SELECT * FROM database.table", "SHOW TABLES", and "DESCRIBE database.table" that hang around in the list and say they're Executing, even 90-170 minutes after they've completed. I would like to know if there is a way to configure Impala to stop/cancel a long-running query after a certain amount of time. I found the link below for setting a time limit, but we're using Tableau, so the Tableau queries (6 users) need more time: https://www.cloudera.com/documentation/enterprise/latest/topics/impala_query_timeout_s.html It seems it's almost necessary to run two instances of Impala (one for Tableau and one for all other requests). Any suggestions? Thanks.
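Instead of two Impala instances, one possible pattern is to set the idle-query timeout per session, keyed on the client. QUERY_TIMEOUT_S is the session option from the linked page; the timeout values and the client grouping below are purely illustrative assumptions:

```python
# Hypothetical per-client idle timeouts in seconds: generous for Tableau
# dashboards, short for everything else.
TIMEOUTS = {"tableau": 10800, "default": 600}

def session_setup_sql(client):
    """SQL to run at session start so idle queries get cancelled."""
    timeout = TIMEOUTS.get(client, TIMEOUTS["default"])
    return f"SET QUERY_TIMEOUT_S={timeout};"

print(session_setup_sql("tableau"))  # long-running dashboard sessions
print(session_setup_sql("adhoc"))    # everyone else gets the short limit
```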
Labels:
- Apache Impala
10-12-2017
12:45 PM
Yes. Impalad version 2.7.0-cdh5.10.1 How to Improve my query performance( SELECT campaign, market, band, SUM(1) FROM tbw GROUP BY 1,2,3; )? Fetched 26038 row(s) in 402.52s Table contains 35,342,034,607 records, 6 986 files, parquet format and 70GB. Impala runs on 10 Nodes(Max Memory 800Gib) Query profile: Query b54a1f635a0503a0:9e55c34f00000000 myflaovr.mycompany.com Profile Query (id=b54a1f635a0503a0:9e55c34f00000000): Summary: Session ID: 874330b779ec785f:6e37ce57b00ee4b7 Session Type: BEESWAX Start Time: 2017-10-12 19:20:14.354494000 End Time: Query Type: QUERY Query State: CREATED Query Status: OK Impala Version: impalad version 2.7.0-cdh5.10.1 RELEASE (build 876895d2a90346e69f2aea02d5528c2125ae7a32) User: venkata@my.com Connected User: venkata@my.com Delegated User: Network Address: ::ffff:10.0.135.192:45724 Default Db: default Sql Statement: select campaign,market, band, SUM(1) FROM my. _tbw GROUP BY 1,2,3 Coordinator: ip-myIP.myflaovr.my.com:22000 Query Options (non default): MEM_LIMIT=2147483648 Plan: ---------------- Estimated Per-Host Requirements: Memory=44.00MB VCores=2 PLAN-ROOT SINK | 04:EXCHANGE [UNPARTITIONED] | hosts=10 per-host-mem=unavailable | tuple-ids=1 row-size=58B cardinality=14130 | 03:AGGREGATE [FINALIZE] | output: sum:merge(1) | group by: campaign, market, band | hosts=10 per-host-mem=10.00MB | tuple-ids=1 row-size=58B cardinality=14130 | 02:EXCHANGE [HASH(campaign,market,band)] | hosts=10 per-host-mem=0B | tuple-ids=1 row-size=58B cardinality=14130 | 01:AGGREGATE [STREAMING] | output: sum(1) | group by: campaign, market, band | hosts=10 per-host-mem=10.00MB | tuple-ids=1 row-size=58B cardinality=14130 | 00:SCAN HDFS [my. 
_tbw, RANDOM] partitions=6917/6917 files=6986 size=69.05GB table stats: 35342034607 rows total column stats: all hosts=10 per-host-mem=24.00MB tuple-ids=0 row-size=50B cardinality=35342034607 ---------------- Estimated Per-Host Mem: 46137344 Estimated Per-Host VCores: 2 Request Pool: root.users Admission result: Admitted immediately Planner Timeline: 44.968ms - Analysis finished: 1.680ms (1.680ms) - Equivalence classes computed: 1.875ms (195.279us) - Single node plan created: 18.650ms (16.774ms) - Runtime filters computed: 18.706ms (55.973us) - Distributed plan created: 18.892ms (186.353us) - Lineage info computed: 18.961ms (69.433us) - Planning finished: 44.968ms (26.006ms) Query Timeline: 2m21s - Query submitted: 0.000ns (0.000ns) - Planning finished: 280.000ms (280.000ms) - Submit for admission: 334.001ms (54.000ms) - Completed admission: 334.001ms (0.000ns) - Ready to start 21 fragment instances: 373.001ms (39.000ms) - All 21 fragment instances started: 912.003ms (539.001ms) - ComputeScanRangeAssignmentTimer: 11.000ms ImpalaServer: - ClientFetchWaitTimer: 0.000ns - RowMaterializationTimer: 0.000ns Execution Profile b54a1f635a0503a0:9e55c34f00000000:(Total: 539.001ms, non-child: 0.000ns, % non-child: 0.00%) Number of filters: 0 Filter routing table: ID Src. Node Tgt. 
Node(s) Targets Target type Partition filter Pending (Expected) First arrived Completed Enabled ---------------------------------------------------------------------------------------------------------------------------- Fragment instance start latencies: Count: 21, 25th %-ile: 1ms, 50th %-ile: 2ms, 75th %-ile: 137ms, 90th %-ile: 153ms, 95th %-ile: 164ms, 99.9th %-ile: 176ms - FiltersReceived: 0 (0) - FinalizationTimer: 0.000ns Coordinator Fragment F02: Instance b54a1f635a0503a0:9e55c34f00000000 (ip-myIP.myflaovr.my.com:22000):(Total: 40.000ms, non-child: 1.000ms, % non-child: 2.50%) MemoryUsage(4s000ms): 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB, 8.00 KB - AverageThreadTokens: 0.00 - BloomFilterBytes: 0 - PeakMemoryUsage: 8.00 KB (8192) - PerHostPeakMemUsage: 64.09 MB (67201658) - RowsProduced: 0 (0) - TotalNetworkReceiveTime: 0.000ns - TotalNetworkSendTime: 0.000ns - TotalStorageWaitTime: 0.000ns - TotalThreadsInvoluntaryContextSwitches: 0 (0) - TotalThreadsTotalWallClockTime: 0.000ns - TotalThreadsSysTime: 0.000ns - TotalThreadsUserTime: 0.000ns - TotalThreadsVoluntaryContextSwitches: 0 (0) Fragment Instance Lifecycle Timings: - OpenTime: 0.000ns - ExecTreeOpenTime: 0.000ns - PrepareTime: 40.000ms - ExecTreePrepareTime: 0.000ns BlockMgr: - BlockWritesOutstanding: 0 (0) - BlocksCreated: 48 (48) - BlocksRecycled: 0 (0) - BufferedPins: 0 (0) - BytesWritten: 0 - MaxBlockSize: 8.00 MB (8388608) - MemoryLimit: 1.60 GB (1717986944) - PeakMemoryUsage: 784.00 KB (802816) - ScratchFileUsedBytes: 0 - TotalBufferWaitTime: 0.000ns - TotalEncryptionTime: 0.000ns - TotalReadBlockTime: 0.000ns PLAN_ROOT_SINK: - PeakMemoryUsage: 0 CodeGen:(Total: 39.000ms, non-child: 39.000ms, % non-child: 100.00%) - CodegenTime: 0.000ns - 
CompileTime: 0.000ns - LoadTime: 0.000ns - ModuleBitcodeSize: 1.90 MB (1992592) - NumFunctions: 0 (0) - NumInstructions: 0 (0) - OptimizationTime: 0.000ns - PeakMemoryUsage: 0 - PrepareTime: 38.000ms EXCHANGE_NODE (id=4): BytesReceived(4s000ms): 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 - BytesReceived: 0 - ConvertRowBatchTime: 0.000ns - DeserializeRowBatchTimer: 0.000ns - FirstBatchArrivalWaitTime: 0.000ns - PeakMemoryUsage: 0 - RowsReturned: 0 (0) - RowsReturnedRate: 0 - SendersBlockedTimer: 0.000ns - SendersBlockedTotalTimer(*): 0.000ns Averaged Fragment F01:(Total: 23.600ms, non-child: 0.000ns, % non-child: 0.00%) split sizes: min: 0, max: 0, avg: 0, stddev: 0 - AverageThreadTokens: 1.00 - BloomFilterBytes: 0 - PeakMemoryUsage: 2.80 MB (2934136) - PerHostPeakMemUsage: 58.02 MB (60843115) - RowsProduced: 0 (0) - TotalNetworkReceiveTime: 0.000ns - TotalNetworkSendTime: 0.000ns - TotalStorageWaitTime: 0.000ns - TotalThreadsInvoluntaryContextSwitches: 0 (0) - TotalThreadsTotalWallClockTime: 0.000ns - TotalThreadsSysTime: 0.000ns - TotalThreadsUserTime: 0.000ns - TotalThreadsVoluntaryContextSwitches: 0 (0) Fragment Instance Lifecycle Timings: - OpenTime: 0.000ns - ExecTreeOpenTime: 0.000ns - PrepareTime: 23.600ms - ExecTreePrepareTime: 800.000us BlockMgr: - BlockWritesOutstanding: 0 (0) - BlocksCreated: 48 (48) - BlocksRecycled: 0 (0) - BufferedPins: 0 (0) - BytesWritten: 0 - MaxBlockSize: 8.00 MB (8388608) - MemoryLimit: 1.60 GB (1717986944) - PeakMemoryUsage: 784.00 KB (802816) - ScratchFileUsedBytes: 0 - TotalBufferWaitTime: 0.000ns - TotalEncryptionTime: 0.000ns - TotalReadBlockTime: 0.000ns DataStreamSender (dst_id=4): - BytesSent: 0 - NetworkThroughput(*): 0.00 /sec - OverallThroughput: 0.00 /sec - PeakMemoryUsage: 2.24 KB (2296) - RowsReturned: 0 (0) - SerializeBatchTime: 0.000ns - TransmitDataRPCTime: 0.000ns - UncompressedRowBatchSize: 0 CodeGen:(Total: 149.800ms, non-child: 149.800ms, % non-child: 
100.00%) - CodegenTime: 2.499ms - CompileTime: 43.400ms - LoadTime: 0.000ns - ModuleBitcodeSize: 1.90 MB (1992592) - NumFunctions: 48 (48) - NumInstructions: 984 (984) - OptimizationTime: 84.400ms - PeakMemoryUsage: 492.00 KB (503808) - PrepareTime: 21.500ms AGGREGATION_NODE (id=3):(Total: 800.000us, non-child: 800.000us, % non-child: 100.00%) - BuildTime: 0.000ns - GetNewBlockTime: 200.002us - GetResultsTime: 0.000ns - HTResizeTime: 0.000ns - HashBuckets: 0 (0) - LargestPartitionPercent: 0 (0) - MaxPartitionLevel: 0 (0) - NumRepartitions: 0 (0) - PartitionsCreated: 16 (16) - PeakMemoryUsage: 2.31 MB (2419840) - PinTime: 0.000ns - RowsRepartitioned: 0 (0) - RowsReturned: 0 (0) - RowsReturnedRate: 0 - SpilledPartitions: 0 (0) - UnpinTime: 0.000ns EXCHANGE_NODE (id=2): - BytesReceived: 0 - ConvertRowBatchTime: 0.000ns - DeserializeRowBatchTimer: 0.000ns - FirstBatchArrivalWaitTime: 0.000ns - PeakMemoryUsage: 0 - RowsReturned: 0 (0) - RowsReturnedRate: 0 - SendersBlockedTimer: 0.000ns - SendersBlockedTotalTimer(*): 0.000ns Fragment F01: Instance b54a1f635a0503a0:12 (host=ip-myIP.myflaovr.my.com:22000):(Total: 20.000ms, non-child: 0.000ns, % non-child: 0.00%) x 10 nodes. Thanks.
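One small observation on the statement itself: SUM(1) with a GROUP BY returns the same per-group row counts as COUNT(*). A self-contained illustration with SQLite as stand-in data (this demonstrates only the equivalence of the two forms, not anything about Impala's relative performance):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbw (campaign TEXT, market TEXT, band TEXT)")
conn.executemany(
    "INSERT INTO tbw VALUES (?, ?, ?)",
    [("c1", "m1", "b1"), ("c1", "m1", "b1"), ("c2", "m2", "b2")],
)

sum1 = conn.execute(
    "SELECT campaign, market, band, SUM(1) FROM tbw "
    "GROUP BY campaign, market, band"
).fetchall()
count = conn.execute(
    "SELECT campaign, market, band, COUNT(*) FROM tbw "
    "GROUP BY campaign, market, band"
).fetchall()
print(sorted(sum1) == sorted(count))  # prints True: identical groups and counts
```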
10-12-2017
06:18 AM
Hi, I have created the table:

CREATE EXTERNAL TABLE tbw (
time_stamp STRING,
operator STRING,
datatype STRING,
band STRING
)
PARTITIONED BY (
campaign STRING,
market STRING,
carrier STRING,
technology STRING
)
STORED AS PARQUET
LOCATION '/venkata/table/tbw';

Impala> explain SELECT campaign, market, band, SUM(1) FROM tbw GROUP BY 1,2,3;
Query: explain select campaign,market,band, SUM(1) FROM tbw GROUP BY 1,2,3
+----------------------------------------------------------+
| Explain String                                           |
+----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=44.00MB VCores=2 |
|                                                          |
| PLAN-ROOT SINK                                           |
| |                                                        |
| 04:EXCHANGE [UNPARTITIONED]                              |
| |                                                        |
| 03:AGGREGATE [FINALIZE]                                  |
| |  output: sum:merge(1)                                  |
| |  group by: campaign, market, band                      |
| |                                                        |
| 02:EXCHANGE [HASH(campaign,market,band)]                 |
| |                                                        |
| 01:AGGREGATE [STREAMING]                                 |
| |  output: sum(1)                                        |
| |  group by: campaign, market, band                      |
| |                                                        |
| 00:SCAN HDFS [tbw]                                       |
|    partitions=6917/6917 files=6986 size=69.05GB          |
+----------------------------------------------------------+
Fetched 18 row(s) in 0.04s

Finally, when I ran:

Impala> SELECT campaign, market, band, SUM(1) FROM tbw GROUP BY 1,2,3;
Query: SELECT campaign,market, band, SUM(1) FROM tbw GROUP BY 1,2,3
Query submitted at: 2017-10-12 13:05:18 (Coordinator: http://ip-myIP.myflaovr.mycompany.com:25000)
Query progress can be monitored at: http://ip-myIP.myflaovr.mycompany.com:25000/query_plan?query_id=ed4d94c1d7bd4e6e:d0e1875e00000000
Fetched 26038 row(s) in 402.52s

It's taking around 7 minutes. Thanks, Venkat.
Labels:
- Apache Impala
09-21-2017
04:56 AM
Before your reply, I rebooted the node, and after that it ran well. Now I see an issue again (the port is running on another node) when I run the COMPUTE STATS command in Impala.

COMPUTE STATS:
1. I have a table with old Parquet data.
2. I then added Parquet data with new data types (INT to STRING) for the same columns.
3. I created a new table in the same location with the new schema:
Impala > CREATE EXTERNAL TABLE database.table2 LIKE PARQUET '/home/output/university/client=england/campaign=second/details=students/part-r-00111-5fce6c4d-784e-457f-9a01-aa6d6ec1187c.snappy.parquet';
Impala > SHOW CREATE TABLE table2;
Then I created the table with table2's schema.
4. Now I get: /home/output/university/client=england/campaign=second/details=students/part-r-00111-5fce6c4d-784e-457f-9a01-aa6d6ec1187c.snappy.parquet has an incompatible Parquet schema for column 'mobistat.psrvoicecdma.systemid'. Column type: STRING, Parquet schema: optional int32 systemid [i:22 d:1 r:0]

Please help me out with this.
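The mismatch can be reasoned about as a schema diff: the table now declares STRING, but older files still store int32. A toy sketch of that check (the schemas and the compatibility table here are assumptions modeled on the error message, not read from real Parquet files):

```python
# Hypothetical declared table schema vs. the schema stored in one old file.
table_schema = {"systemid": "STRING", "band": "STRING"}
old_file_schema = {"systemid": "INT32", "band": "BYTE_ARRAY"}

# Parquet physical types each declared type can be read from, per this sketch.
COMPATIBLE = {"STRING": {"BYTE_ARRAY"}, "INT": {"INT32"}}

def incompatible_columns(table, parquet):
    """Columns whose declared type cannot be read from the file's stored type."""
    return sorted(
        col for col, t in table.items()
        if parquet.get(col) is not None
        and parquet[col] not in COMPATIBLE.get(t, set())
    )

print(incompatible_columns(table_schema, old_file_schema))  # ['systemid']
```

Old files would need to be rewritten (or the column declared with its original type) before such a check comes back empty.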
09-14-2017
05:36 AM
Impala Shell v2.7.0-cdh5.10.1 /var/log/impalad/impalad.WARNING Log line format: [IWEF] mmdd hh:mm:ss.uuuuuu threadid file:line ] msg E0914 10:58:10.457620 94112 logging.cc:121] stderr will be logged to this file. W0914 10:58:10.467237 94112 authentication.cc:1003] LDAP authentication is being used with TLS, but without an --ldap_ca_certificate file, the identity of the LDAP server cannot be verified. Network communication (and hence passwords) could be intercepted by a man-in-the-middle attack E0914 10:58:13.220167 94268 thrift-server.cc:182] ThriftServer 'backend' (on port: 22000) exited due to TException: Could not bind: Transport endpoint is not connected E0914 10:58:13.220221 94112 thrift-server.cc:171] ThriftServer 'backend' (on port: 22000) did not start correctly F0914 10:58:13.221709 94112 impalad-main.cc:89] ThriftServer 'backend' (on port: 22000) did not start correctly . Impalad exiting. /var/log/impalad/impalad.ERROR E0914 10:58:10.457620 94112 logging.cc:121] stderr will be logged to this file. E0914 10:58:13.220167 94268 thrift-server.cc:182] ThriftServer 'backend' (on port: 22000) exited due to TException: Could not bind: Transport endpoint is not connected E0914 10:58:13.220221 94112 thrift-server.cc:171] ThriftServer 'backend' (on port: 22000) did not start correctly F0914 10:58:13.221709 94112 impalad-main.cc:89] ThriftServer 'backend' (on port: 22000) did not start correctly . Impalad exiting. *** Check failure stack trace: *** @ 0x1b566ad (unknown) @ 0x1b58fd6 (unknown) @ 0x1b561cd (unknown) @ 0x1b59a7e (unknown) @ 0xb246ec (unknown) @ 0x7d12f3 (unknown) @ 0x7fb5ed93ab15 __libc_start_main @ 0x80225d (unknown) Picked up JAVA_TOOL_OPTIONS: Wrote minidump to /var/log/impala-minidumps/impalad/1d64946f-d7e0-cedf-19531ffd-1349463b.dmp Process Status Suppress... This role's process failed to start. Need help....
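"Could not bind" on port 22000 usually means another process already holds the port, or the configured hostname does not resolve to a local address. A minimal reproduction of the first cause in plain Python (no Impala involved):

```python
import socket

def port_conflict_demo():
    """Return True if a second bind to an already-bound port fails (EADDRINUSE),
    mirroring what a second impalad (or a stale process) on 22000 would hit."""
    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.bind(("127.0.0.1", 0))        # let the OS pick a free port
    a.listen(1)
    port = a.getsockname()[1]
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        b.bind(("127.0.0.1", port))  # second bind on the same port
        return False
    except OSError:                  # "Could not bind": port already held
        return True
    finally:
        b.close()
        a.close()

print(port_conflict_demo())
```

Checking what holds port 22000 on the failing node (e.g. with netstat) would distinguish this case from a hostname-resolution problem.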
Labels:
- Apache Impala
09-14-2017
05:29 AM
/var/log/impalad/impalad.WARNING E0914 10:58:10.457620 94112 logging.cc:121] stderr will be logged to this file. W0914 10:58:10.467237 94112 authentication.cc:1003] LDAP authentication is being used with TLS, but without an --ldap_ca_certificate file, the identity of the LDAP server cannot be verified. Network communication (and hence passwords) could be intercepted by a man-in-the-middle attack E0914 10:58:13.220167 94268 thrift-server.cc:182] ThriftServer 'backend' (on port: 22000) exited due to TException: Could not bind: Transport endpoint is not connected E0914 10:58:13.220221 94112 thrift-server.cc:171] ThriftServer 'backend' (on port: 22000) did not start correctly F0914 10:58:13.221709 94112 impalad-main.cc:89] ThriftServer 'backend' (on port: 22000) did not start correctly . Impalad exiting. Can you help me?