Member since
11-03-2017
94
Posts
13
Kudos Received
4
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5191 | 04-11-2018 09:48 AM |
| | 1841 | 12-08-2017 01:07 PM |
| | 2367 | 12-01-2017 04:23 PM |
| | 11696 | 11-06-2017 04:08 PM |
07-03-2018
09:44 AM
What if I want to use pandas and Matplotlib? Should I use PySpark for that?
07-02-2018
11:23 AM
1 Kudo
I've checked the list of interpreters installed on my Zeppelin, and I found out that Python isn't among them. For now, to run Python code I use %spark.pyspark. I would like to know if it's a good idea to use PySpark instead of Python, and whether it's recommended to have the Python interpreter installed even though PySpark works fine for my Python code?
Labels:
- Apache Zeppelin
04-11-2018
09:48 AM
1 Kudo
The logic is quite simple: 128 MB is a power of two, so the number can be written exactly in binary: 128 MB = 131072 KB = 134217728 bytes = 2^27 bytes, i.e. a 1 followed by 27 zeros in binary. With such a size we don't waste any bits when we store data in memory. You could say this is a general convention for storage sizes in computer science, not just for big data.
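The arithmetic above can be checked in a few lines of plain Python, just to illustrate the powers of two:

```python
# 128 MB expressed in smaller units; every step is an exact power of two
mb = 128
kb = mb * 1024        # 131072 KB
b = kb * 1024         # 134217728 bytes

print(b == 2 ** 27)   # the block size is exactly 2^27 bytes
print(bin(b))         # a 1 followed by 27 zeros in binary
```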
04-04-2018
11:56 AM
1 Kudo
I have a lot of external tables in my Hive warehouse and I would like to drop all of these tables, together with their data, automatically. How can I do this?
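One way to script this is to generate one DROP statement per table and feed them to Hive; a minimal sketch, in which the database and table names are hypothetical. Keep in mind that for external tables DROP TABLE removes only the metadata, so the underlying files still have to be deleted separately (e.g. with `hdfs dfs -rm -r` on each table's location):

```python
def drop_statements(tables, database="default"):
    """Build one DROP TABLE statement per table name."""
    return [f"DROP TABLE IF EXISTS {database}.{t};" for t in tables]

# hypothetical table names, for illustration only
for stmt in drop_statements(["events_ext", "logs_ext"], database="mydb"):
    print(stmt)
```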
Labels:
- Apache Hive
04-01-2018
05:10 PM
I had an external table containing some string columns, and now I need to change the datatype of some of them, so I used: ALTER TABLE table CHANGE col col type; but this query gives me an error: org.apache.spark.sql.AnalysisException: ALTER TABLE CHANGE COLUMN is not supported for changing column 'id' with type 'StringType' to 'id' with type 'LongType'; any suggestion would be greatly welcome, thanks
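Since Spark SQL rejects this ALTER, one workaround is to recreate the table with the column cast to the new type via CREATE TABLE ... AS SELECT. A sketch, where the table and column names are hypothetical; the helper only builds the SQL text you would then run:

```python
def cast_column_ctas(src, dst, columns, cast_col, new_type):
    """Build a CTAS statement that casts one column to a new type."""
    select_cols = [
        f"CAST({c} AS {new_type}) AS {c}" if c == cast_col else c
        for c in columns
    ]
    return f"CREATE TABLE {dst} AS SELECT {', '.join(select_cols)} FROM {src};"

# hypothetical names: cast 'id' from string to bigint
print(cast_column_ctas("mytable", "mytable_new", ["id", "name"], "id", "BIGINT"))
```

After verifying the new table, you would drop the old one and rename the new one into its place.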
Labels:
- Apache Hive
03-26-2018
09:33 AM
1 Kudo
@Andrea L Like Michael Young said, Sqoop doesn't support importing from or exporting to Hive. It's recommended to use Hive's EXPORT/IMPORT queries instead to move your data between two Hive warehouses; check this out: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ImportExport However, the CSV method can cause separator problems, and if the data is large it would all have to be grouped into one CSV file, which is not reassuring.
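The EXPORT/IMPORT flow from the linked manual looks roughly like this; a sketch in which the table name and HDFS path are hypothetical. The helper just builds the two statements, one to run on the source warehouse and one on the target:

```python
def export_import(table, hdfs_path):
    """Build the EXPORT (source side) and IMPORT (target side) statements."""
    export_sql = f"EXPORT TABLE {table} TO '{hdfs_path}';"
    import_sql = f"IMPORT TABLE {table} FROM '{hdfs_path}';"
    return export_sql, import_sql

# hypothetical table and staging path
exp, imp = export_import("sales", "/tmp/hive_export/sales")
print(exp)
print(imp)
```

Between the two steps the exported directory is copied to the target cluster, e.g. with distcp.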
03-20-2018
08:02 PM
@Shashank V C Why don't you use "beeline"? I don't think HDFS can tell the difference between an external table and a non-external table!
03-01-2018
09:29 AM
Hi @hema moger, if your remote server doesn't belong to your cluster, you will have to bring the data onto one of the servers in your cluster and then use "hdfs dfs -put" to put it on HDFS. In the other case, i.e. if the remote server does belong to the cluster, you just need to run "hadoop fs -put" on your CSV file.
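The copy step can be scripted; this sketch only assembles the `hadoop fs -put` command line (both paths are hypothetical), which you would then run on a cluster node:

```python
def put_command(local_csv, hdfs_dir):
    """Build the hadoop fs -put command as an argument list."""
    return ["hadoop", "fs", "-put", local_csv, hdfs_dir]

# hypothetical local file and HDFS target directory
cmd = put_command("/data/input.csv", "/user/hema/input/")
print(" ".join(cmd))
```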
02-26-2018
12:35 PM
@Jay Kumar SenSharma nc: connect to 10.166.54.12 port 8020 (tcp) failed: Connection refused
tcp 0 0 10.166.54.12:8020 0.0.0.0:* LISTEN 19578/java
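The same connectivity check that `nc` does can be run from Python; a minimal sketch. That `nc` reported "Connection refused" for 10.166.54.12:8020 even though netstat shows the port in LISTEN often means the check was run from a different host, or a firewall is in the way:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```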