09-12-2016
09:44 AM
4 Kudos
Solved it! The problem was with these parameters:
hive.llap.daemon.yarn.container.mb
llap_heap_size
Ambari sets the default value of llap_heap_size to about 96% of hive.llap.daemon.yarn.container.mb (when I move the "% of Cluster Capacity" slider), although it should be about 80%. Setting the correct values manually allowed HiveServer2 Interactive to start.
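To illustrate with the numbers from the log below (the generated command had --size 30720m --xmx 29696m, i.e. a heap of roughly 96% of the container), a correct ~80% setting would look roughly like this (illustrative values for this container size, not a recommendation for every cluster):
hive.llap.daemon.yarn.container.mb=30720
llap_heap_size=24576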
09-07-2016
06:37 PM
2 Kudos
On a freshly installed HDP-2.5 I can't start HiveServer2 Interactive. The cluster is highly available. I tried to install HiveServer2 Interactive on both the active and standby NameNodes, but with the same unsuccessful result. I didn't find any obvious exceptions in the logs. Here is the stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 512, in check_llap_app_status
status = do_retries()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/decorator.py", line 55, in wrapper
return function(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 505, in do_retries
raise Fail(status_str)
Fail: LLAP app 'llap0' current state is COMPLETE.
2016-09-07 20:37:48,705 - LLAP app 'llap0' deployment unsuccessful.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 535, in <module>
HiveServerInteractive().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 123, in start
raise Fail("Skipping START of Hive Server Interactive since LLAP app couldn't be STARTED.")
resource_management.core.exceptions.Fail: Skipping START of Hive Server Interactive since LLAP app couldn't be STARTED.
stdout is too long, so here are some excerpts:
2016-09-07 20:31:49,638 - Starting LLAP
2016-09-07 20:31:49,643 - Command: /usr/hdp/current/hive-server2-hive2/bin/hive --service llap --instances 1 --slider-am-container-mb 5120 --size 30720m --cache 0m --xmx 29696m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49 --args " -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:MetaspaceSize=1024m -XX:InitiatingHeapOccupancyPercent=80 -XX:MaxGCPauseMillis=200"
2016-09-07 20:31:49,643 - checked_call['/usr/hdp/current/hive-server2-hive2/bin/hive --service llap --instances 1 --slider-am-container-mb 5120 --size 30720m --cache 0m --xmx 29696m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49 --args " -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:MetaspaceSize=1024m -XX:InitiatingHeapOccupancyPercent=80 -XX:MaxGCPauseMillis=200"'] {'logoutput': True, 'user': 'hive', 'stderr': -1}
which: no hbase in (/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
INFO cli.LlapServiceDriver: LLAP service driver invoked with arguments=--hiveconf
INFO conf.HiveConf: Found configuration file file:/etc/hive2/2.5.0.0-1245/0/conf.server/hive-site.xml
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
INFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
INFO metastore.ObjectStore: ObjectStore, initialize called
WARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist
INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
INFO metastore.ObjectStore: Initialized ObjectStore
INFO metastore.HiveMetaStore: Added admin role in metastore
INFO metastore.HiveMetaStore: Added public role in metastore
INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
INFO metastore.HiveMetaStore: 0: get_all_functions
INFO HiveMetaStore.audit: ugi=hive ip=unknown-ip-addr cmd=get_all_functions
WARN cli.LlapServiceDriver: Java versions might not match : JAVA_HOME=[/usr/jdk64/jdk1.8.0_77],process jre=[/usr/jdk64/jdk1.8.0_77/jre]
INFO cli.LlapServiceDriver: Using [/usr/jdk64/jdk1.8.0_77] for JAVA_HOME
INFO cli.LlapServiceDriver: Copied hadoop metrics2 properties file from file:/etc/hive2/2.5.0.0-1245/0/conf.server/hadoop-metrics2-llapdaemon.properties
INFO cli.LlapServiceDriver: LLAP service driver finished
Prepared /var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49/run.sh for running LLAP on Slider
2016-09-07 20:32:18,650 - checked_call returned (0, 'Prepared /var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49/run.sh for running LLAP on Slider', 'which: no hbase in (/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent)\nSLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]\nINFO cli.LlapServiceDriver: LLAP service driver invoked with arguments=--hiveconf\nINFO conf.HiveConf: Found configuration file file:/etc/hive2/2.5.0.0-1245/0/conf.server/hive-site.xml\nWARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist\nWARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]\nWARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist\nINFO metastore.HiveMetaStore: 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore\nINFO metastore.ObjectStore: ObjectStore, initialize called\nWARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist\nINFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"\nINFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL\nINFO metastore.ObjectStore: Initialized ObjectStore\nINFO metastore.HiveMetaStore: Added admin role in metastore\nINFO metastore.HiveMetaStore: Added public role in metastore\nINFO metastore.HiveMetaStore: No user is added in admin role, since config is empty\nINFO metastore.HiveMetaStore: 0: get_all_functions\nINFO HiveMetaStore.audit: ugi=hive\tip=unknown-ip-addr\tcmd=get_all_functions\t\nWARN cli.LlapServiceDriver: Java versions might not match : JAVA_HOME=[/usr/jdk64/jdk1.8.0_77],process jre=[/usr/jdk64/jdk1.8.0_77/jre]\nINFO cli.LlapServiceDriver: Using [/usr/jdk64/jdk1.8.0_77] for JAVA_HOME\nINFO cli.LlapServiceDriver: Copied hadoop metrics2 properties file from file:/etc/hive2/2.5.0.0-1245/0/conf.server/hadoop-metrics2-llapdaemon.properties\nINFO cli.LlapServiceDriver: LLAP service driver finished')
2016-09-07 20:32:18,651 - Run file path: /var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49/run.sh
2016-09-07 20:32:18,652 - Execute['/var/lib/ambari-agent/tmp/llap-slider2016-09-07_17-31-49/run.sh'] {'user': 'hive'}
2016-09-07 20:32:48,625 - Submitted LLAP app name : llap0
2016-09-07 20:32:48,627 - checked_call['/usr/hdp/current/hive-server2-hive2/bin/hive --service llapstatus --name llap0 --findAppTimeout 0'] {'logoutput': False, 'user': 'hive', 'stderr': -1}
2016-09-07 20:32:59,607 - checked_call returned (0, '{\n "amInfo" : {\n "appName" : "llap0",\n "appType" : "org-apache-slider",\n "appId" : "application_1473264739795_0004"\n },\n "state" : "LAUNCHING",\n "appStartTime" : 1473269567664\n}', 'which: no hbase in (/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent)\nSLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]\nINFO cli.LlapStatusServiceDriver: LLAP status invoked with arguments = --hiveconf\nINFO conf.HiveConf: Found configuration file file:/etc/hive2/2.5.0.0-1245/0/conf.server/hive-site.xml\nWARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist\nINFO impl.TimelineClientImpl: Timeline service address: http://hdp-nn1.co.vectis.local:8188/ws/v1/timeline/\nINFO client.AHSProxy: Connecting to Application History server at hdp-nn1.co.vectis.local/10.255.242.180:10200\nINFO cli.LlapStatusServiceDriver: LLAP status finished')
2016-09-07 20:32:59,608 - Received 'llapstatus' command 'output' : {
"amInfo" : {
"appName" : "llap0",
"appType" : "org-apache-slider",
"appId" : "application_1473264739795_0004"
},
"state" : "LAUNCHING",
"appStartTime" : 1473269567664
}
2016-09-07 20:32:59,608 - Marker index for start of JSON data for 'llapsrtatus' comamnd : 0
2016-09-07 20:32:59,610 - LLAP app 'llap0' current state is LAUNCHING.
2016-09-07 20:32:59,611 - Will retry 19 time(s), caught exception: LLAP app 'llap0' current state is LAUNCHING.. Sleeping for 2 sec(s)
2016-09-07 20:33:01,614 - checked_call['/usr/hdp/current/hive-server2-hive2/bin/hive --service llapstatus --name llap0 --findAppTimeout 0'] {'logoutput': False, 'user': 'hive', 'stderr': -1}
2016-09-07 20:33:15,295 - checked_call returned (0, '{\n "amInfo" : {\n "appName" : "llap0",\n "appType" : "org-apache-slider",\n "appId" : "application_1473264739795_0004",\n "containerId" : "container_e12_1473264739795_0004_01_000001",\n "hostname" : "hdp-dn2.co.vectis.local",\n "amWebUrl" : "http://hdp-dn2.co.vectis.local:40485/"\n },\n "state" : "LAUNCHING",\n "originalConfigurationPath" : "hdfs://prodcluster/user/hive/.slider/cluster/llap0/snapshot",\n "generatedConfigurationPath" : "hdfs://prodcluster/user/hive/.slider/cluster/llap0/generated",\n "desiredInstances" : 1,\n "liveInstances" : 0,\n "appStartTime" : 1473269583908\n}', 'which: no hbase in (/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/var/lib/ambari-agent)\nSLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:/usr/hdp/2.5.0.0-1245/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]\nINFO cli.LlapStatusServiceDriver: LLAP status invoked with arguments = --hiveconf\nINFO conf.HiveConf: Found configuration file file:/etc/hive2/2.5.0.0-1245/0/conf.server/hive-site.xml\nWARN conf.HiveConf: HiveConf of name hive.llap.daemon.allow.permanent.fns does not exist\nINFO impl.TimelineClientImpl: Timeline service address: http://hdp-nn1.co.vectis.local:8188/ws/v1/timeline/\nINFO client.AHSProxy: Connecting to Application History server at hdp-nn1.co.vectis.local/10.255.242.180:10200\nWARN curator.CuratorZookeeperClient: session timeout [10000] is less than connection timeout [15000]\nINFO impl.LlapZookeeperRegistryImpl: Llap Zookeeper Registry is enabled with registryid: llap0\nINFO impl.LlapRegistryService: Using LLAP registry type org.apache.hadoop.hive.llap.registry.impl.LlapZookeeperRegistryImpl@4e6f2bb5\nINFO impl.LlapZookeeperRegistryImpl: UGI security is not enabled, or non-daemon environment. Skipping setting up ZK auth.\nINFO imps.CuratorFrameworkImpl: Starting\nINFO impl.LlapRegistryService: Using LLAP registry (client) type: Service LlapRegistryService in state LlapRegistryService: STARTED\nINFO state.ConnectionStateManager: State change: CONNECTED\nINFO cli.LlapStatusServiceDriver: No information found in the LLAP registry\nINFO cli.LlapStatusServiceDriver: LLAP status finished')
2016-09-07 20:33:15,295 - Received 'llapstatus' command 'output' : {
"amInfo" : {
"appName" : "llap0",
"appType" : "org-apache-slider",
"appId" : "application_1473264739795_0004",
"containerId" : "container_e12_1473264739795_0004_01_000001",
"hostname" : "hdp-dn2.co.vectis.local",
"amWebUrl" : "http://hdp-dn2.co.vectis.local:40485/"
},
"state" : "LAUNCHING",
"originalConfigurationPath" : "hdfs://prodcluster/user/hive/.slider/cluster/llap0/snapshot",
"generatedConfigurationPath" : "hdfs://prodcluster/user/hive/.slider/cluster/llap0/generated",
"desiredInstances" : 1,
"liveInstances" : 0,
"appStartTime" : 1473269583908
}
Labels:
- Apache Hive
05-03-2016
12:43 PM
Hi @Ian Roberts, thanks for the clarification.
04-04-2016
02:30 PM
If I use user-limit-factor=2.5, then why do I need to set yarn.scheduler.capacity.root.it.capacity=40? I could set yarn.scheduler.capacity.root.it.capacity=100 and the result would be the same.
Is yarn.scheduler.capacity.root.it.capacity just a lower limit (a guaranteed minimum)?
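To make my arithmetic explicit: 0.40 × 2.5 = 1.0, so a single user in the it queue can reach 100% of the cluster (all 77 containers), which is the same ceiling as capacity=100 with user-limit-factor=1, assuming maximum-capacity stays at 100.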
04-04-2016
10:05 AM
Hi nmaillard, I tried that already:
yarn.scheduler.capacity.root.it.user-limit-factor=2
yarn.scheduler.capacity.root.price.user-limit-factor=1
In this case ituser1 picks up 63 containers, but if priceuser1 comes at that time, ituser1 does not give up the vacant containers; it keeps using them for itself. I expected ituser1 to release 31 containers for priceuser1, but that did not happen. I guess it is because ituser1 considers itself eligible for 63 containers instead of 32.
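From what I have read since, a queue that has grown over its capacity only gives containers back as they finish, unless Capacity Scheduler preemption is enabled. A sketch of the yarn-site.xml properties involved (not yet verified on our cluster):
yarn.resourcemanager.scheduler.monitor.enable=true
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
With preemption enabled, the scheduler should gradually reclaim ituser1's over-capacity containers for priceuser1 instead of waiting for the running tasks to finish.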
04-04-2016
09:03 AM
I created two queues (it and price). I expected that when a single user runs a job on the cluster, he gets all of the cluster's free resources (77 containers in our case). However, ituser1 uses only the resources available to its queue (32 containers). Is it possible to allow ituser1 to use all available resources of the cluster? Total number of containers in the cluster: 77. The Capacity Scheduler configuration:
yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.accessible-node-labels.default.capacity=-1
yarn.scheduler.capacity.root.accessible-node-labels.default.maximum-capacity=-1
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default-node-label-expression=
yarn.scheduler.capacity.root.it.user-limit-factor=1
yarn.scheduler.capacity.root.price.user-limit-factor=1
yarn.scheduler.capacity.root.it.state=RUNNING
yarn.scheduler.capacity.root.price.state=RUNNING
yarn.scheduler.capacity.root.it.capacity=40
yarn.scheduler.capacity.root.price.capacity=60
yarn.scheduler.capacity.root.it.maximum-capacity=100
yarn.scheduler.capacity.root.price.maximum-capacity=100
yarn.scheduler.capacity.queue-mappings=u:ituser1:it,u:ituser2:it,u:ituser3:it,u:priceuser1:price,u:priceuser2:price,u:priceuser3:price
yarn.scheduler.capacity.root.it.minimum-user-limit-percent=50
yarn.scheduler.capacity.root.price.minimum-user-limit-percent=30
yarn.scheduler.capacity.root.price.default.ordering-policy=fair
yarn.scheduler.capacity.root.it.default.ordering-policy=fair
yarn.scheduler.capacity.root.it.acl_administer_jobs=*
yarn.scheduler.capacity.root.it.acl_submit_applications=*
yarn.scheduler.capacity.root.price.acl_administer_jobs=*
yarn.scheduler.capacity.root.price.acl_submit_applications=*
yarn.scheduler.capacity.root.queues=it,price
Labels:
- Apache YARN
02-09-2016
10:48 AM
2 Kudos
@Benjamin Leonhardi, @Sourygna Luangsay I solved the problem! I was hitting this bug: IndexOutOfBoundsException with RemoveDynamicPruningBySize (I forgot to mention that I use Hive 0.14). With this setting my join works even over the six-month period:
set hive.tez.dynamic.partition.pruning=false;
But what about other queries? As far as I know, dynamic partition pruning on Tez is a very useful feature, and I would not want to disable it globally...
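One option, if nothing better turns up, is to scope the setting to a session or a single script instead of changing hive-site.xml, since set in Hive only affects the current session:
set hive.tez.dynamic.partition.pruning=false;
-- the problematic join goes here
set hive.tez.dynamic.partition.pruning=true;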
02-09-2016
07:28 AM
@Benjamin Leonhardi Both tables are bucketed ORC. tkonkurent has 2 buckets (CLUSTERED BY (article)), toprice has 3 buckets (CLUSTERED BY (material)). The contents of the article and material columns are the same. However, in tkonkurent the article column is only about 20% populated (that's why there are only 2 buckets: one big bucket with article = null and a second small one with article not null).
tkonkurent, 2 files per partition:
hadoop fs -du -h /apps/hive/warehouse/price.db/tkonkurent/calday=2016-01-2*
236.1 M /apps/hive/warehouse/price.db/tkonkurent/calday=2016-01-20/000000_0
13.8 M /apps/hive/warehouse/price.db/tkonkurent/calday=2016-01-20/000001_0
225.8 M /apps/hive/warehouse/price.db/tkonkurent/calday=2016-01-21/000000_0
16.6 M /apps/hive/warehouse/price.db/tkonkurent/calday=2016-01-21/000001_0
toprice, 3 files per partition:
hadoop fs -du -h /apps/hive/warehouse/price.db/toprice/calday=2016-01-2*
36.1 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-20/000000_0
35.6 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-20/000001_0
36.1 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-20/000002_0
36.6 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-21/000000_0
36.1 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-21/000001_0
36.5 M /apps/hive/warehouse/price.db/toprice/calday=2016-01-21/000002_0
Here are the table definitions:
create table toprice
(
PLANT string,
MATERIAL string,
.... 35 columns....
ZCOPA3172 double
)
PARTITIONED BY (CALDAY string)
CLUSTERED BY (MATERIAL) INTO 3 BUCKETS
STORED AS ORC
tblproperties ("orc.compress"="ZLIB", "orc.stripe.size"="67108864","orc.row.index.stride"="10000");
CREATE TABLE tkonkurent
(
additionalaction string,
article string,
.... 46 columns ....
createdtime string
)
PARTITIONED BY (calday string)
CLUSTERED BY (article) SORTED BY (article ASC, marketid ASC) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('orc.compress'='ZLIB', 'orc.row.index.stride'='10000', 'orc.stripe.size'='67108864')
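In case it helps the investigation, the ORC stripe and index metadata of a single bucket file can be dumped with Hive's orcfiledump service, e.g. for the first toprice file from the listing above:
hive --orcfiledump /apps/hive/warehouse/price.db/toprice/calday=2016-01-20/000000_0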
Benjamin, do you know what is happening during the mapper initialization stage?
02-08-2016
12:48 PM
Both subqueries work fine separately. Actually, the join works for periods of less than 20 days. The problem occurs only when the period is greater than 20 days.