Member since 11-07-2016
637 Posts
253 Kudos Received
144 Solutions

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2245 | 12-06-2018 12:25 PM |
|  | 2302 | 11-27-2018 06:00 PM |
|  | 1792 | 11-22-2018 03:42 PM |
|  | 2846 | 11-20-2018 02:00 PM |
|  | 5166 | 11-19-2018 03:24 PM |
10-10-2018
06:05 AM
1000 is the default for the Spark interpreter. You can set common.max_count at a global level. You should not see any adverse effects if you increase the limit, but if your data size is very large you may need to tweak the above-mentioned params accordingly.
10-10-2018
05:26 AM
@Junfeng Chen, There are interpreter-level properties. For example, Spark has zeppelin.spark.maxResult, whose default value is 1000, so even if there are more than 1000 rows it will fetch only 1000. If you need more rows, you can increase the limit. You may also need to tweak zeppelin.interpreter.output.limit, zeppelin.websocket.max.text.message.size, ZEPPELIN_MEM, and ZEPPELIN_INTP_MEM according to your output size. Refer to this link for more info on all the properties: https://zeppelin.apache.org/docs/0.7.2/install/configuration.html
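As a quick way to check what is currently configured, here is a minimal sketch that reads zeppelin-site.xml; the /etc/zeppelin/conf path is an assumption based on a typical HDP layout, so adjust it for your install:

```python
# Sketch: print the current values of the Zeppelin output-limit
# properties from zeppelin-site.xml (Hadoop-style XML config file).
import xml.etree.ElementTree as ET

SITE_XML = "/etc/zeppelin/conf/zeppelin-site.xml"  # assumption: HDP default path
WATCHED = {"zeppelin.interpreter.output.limit",
           "zeppelin.websocket.max.text.message.size"}

root = ET.parse(SITE_XML).getroot()
for prop in root.findall("property"):
    name = prop.findtext("name")
    if name in WATCHED:
        print(name, "=", prop.findtext("value"))
```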
10-10-2018
04:35 AM
@Junfeng Chen, Yes. Zeppelin notebook results are stored in JSON format in HDFS (from HDP 2.6 onwards) and on the native filesystem prior to that version. Since it is stored in HDFS, it will not be a problem even if the size is huge. You can check the output here:
Native FS path: /usr/hdp/current/zeppelin-server/notebook/{notebook-id}/note.json
HDFS path: /user/zeppelin/notebook/{notebook-id}/note.json
You can check for the results key in note.json. If this helps, please take a moment to login and "Accept" the answer.
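A minimal sketch for inspecting those stored results, assuming you have copied note.json locally first (e.g. with hdfs dfs -get). The exact key name (results vs. result) varies across Zeppelin versions, so the sketch checks both:

```python
# Sketch: print the stored result of each paragraph in a note.json.
import json

with open("note.json") as f:
    note = json.load(f)

for para in note.get("paragraphs", []):
    # key name differs across Zeppelin versions
    res = para.get("results") or para.get("result")
    if res:
        print(para.get("title") or para.get("id"), "->", res)
```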
10-09-2018
02:20 PM
@Lakshmi Prathyusha, I'm not sure how to do this in Scala, but Scala has similar date/time functions, so you can apply the same logic there.
10-09-2018
09:52 AM
@Lakshmi Prathyusha, You can write a simple Python snippet like the one below to read the subfolders. I have put a print statement in the code, but you can replace it with a subprocess call to actually run each command (see the sketch after the code).

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta

today = date.today()
two_months_back = today - relativedelta(months=2)
delta = today - two_months_back

# Walk every day from two months back until today and print the
# hdfs dfs -ls command for that day's subfolder (yyyymmdd format).
for i in range(delta.days + 1):
    dt = str(two_months_back + timedelta(i)).replace("-", "")
    print("hdfs dfs -ls s3a://bucket/Folder/1005/SoB/%s" % dt)
```

-Aditya
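For the subprocess swap mentioned above, a minimal sketch (this assumes the hdfs CLI is on the PATH of the machine running the script):

```python
import subprocess

def list_day(dt):
    """Run hdfs dfs -ls for one yyyymmdd subfolder and return its output."""
    cmd = ["hdfs", "dfs", "-ls", "s3a://bucket/Folder/1005/SoB/%s" % dt]
    return subprocess.check_output(cmd).decode()
```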
10-08-2018
04:46 PM
@Madhura Mhatre,
1) If you have multiple HiveServer2 instances, you will have multiple znodes (of the above-mentioned format) under /hiveserver2. When you stop a HiveServer2 instance, its znode is deleted from ZooKeeper. If you stop all the HiveServer2 instances, then /hiveserver2 will have no znodes under it. You should not delete /hiveserver2: it is the parent znode, and the individual HiveServer2 znodes sit under it.
2) By installing, do you mean you want to add another HiveServer2 to the cluster? If yes, follow the doc link below:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hadoop-high-availability/content/additional_hs2_installed_with_ambari.html
10-08-2018
04:05 PM
1 Kudo
@Madhura Mhatre, When you start HiveServer2, it creates a znode under /hiveserver2 in the below format:
serverUri=hiveserver2host:10000;version=3.1.0.3.0.1.0-187;sequence=0000000000
After HiveServer2 is stopped, it deletes this znode but not the parent /hiveserver2 znode. If you want to install HiveServer2 again, you can do it from Ambari. Please "Accept" the answer if this helps 🙂
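If you want to check which instances are currently registered, here is a minimal sketch using the third-party kazoo library (pip install kazoo); the quorum address is a placeholder, so point it at your own ZooKeeper hosts:

```python
# Sketch: list the HiveServer2 znodes registered under /hiveserver2.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk-host1:2181,zk-host2:2181")  # placeholder quorum
zk.start()
try:
    for znode in zk.get_children("/hiveserver2"):
        print(znode)  # e.g. serverUri=...;version=...;sequence=...
finally:
    zk.stop()
```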
10-08-2018
03:32 PM
@Josh Nicholson, You can grep for the location. I can't think of another solution for now.
10-05-2018
04:04 PM
@Josh Nicholson, You can put all your SQL commands in a file and run that file using beeline. For example, queries.sql has the below statements:

describe formatted table1;
describe formatted table2;

You can then run queries.sql like below:

beeline -u "{url}" -f queries.sql

Please "Accept" the answer if this helps.
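If the table list is long, a minimal sketch for generating queries.sql (the table names here are placeholders), which also ties in with grepping the output for the storage location mentioned above:

```python
# Sketch: write a "describe formatted" statement for each table
# into queries.sql, ready to be fed to beeline with -f.
tables = ["table1", "table2"]  # placeholder table names

with open("queries.sql", "w") as f:
    for t in tables:
        f.write("describe formatted %s;\n" % t)

# Then run it and grep for the storage location, e.g.:
#   beeline -u "{url}" -f queries.sql | grep -i location
```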
10-05-2018
02:30 PM
@Maryem Mary, Did this work for you? Please take a moment to login and "Accept" the answer if this helped. This will be really useful for other community users 🙂