Member since
06-02-2020
75
Posts
15
Kudos Received
10
Solutions
My Accepted Solutions
Views | Posted
---|---
318 | 10-27-2021 11:10 PM
371 | 10-15-2021 11:55 PM
889 | 09-14-2021 10:46 PM
539 | 08-30-2021 11:24 PM
2183 | 08-11-2021 08:26 PM
04-29-2022
05:29 AM
Hi @JoeR Spark supports reading files in multiple formats such as Parquet, ORC, JSON, XML, Avro, CSV, etc. I don't think there is a direct mechanism to read the data from the payload. If I find a different solution, I will share it with you.
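As a rough sketch (the input paths below are placeholders, not from the original question), the built-in DataFrameReader covers most of these formats directly in spark-shell; Avro and XML additionally need the spark-avro / spark-xml packages on the classpath:
// Each reader call returns a DataFrame with the inferred schema.
val parquetDF = spark.read.parquet("/tmp/input.parquet")
val orcDF = spark.read.orc("/tmp/input.orc")
val jsonDF = spark.read.json("/tmp/input.json")
val csvDF = spark.read.option("header", "true").option("inferSchema", "true").csv("/tmp/input.csv")
csvDF.printSchema()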
... View more
04-06-2022
03:32 AM
1 Kudo
Hi @yagoaparecidoti Based on the exception, it looks like a Kerberos issue. Since CDH 5.x and CDH 6.x clusters have reached End of Life (EOL), we are not able to provide a solution for them. You can use a CDP cluster to test your scenario; it is supported with both Spark2 and Spark3.
... View more
04-05-2022
12:46 AM
1 Kudo
In this post, we will learn how to create a Kafka topic and how to produce and consume messages from it. After testing the basic producer and consumer example, we will test it with Spark using the spark-examples JAR file.
Creating a Kafka topic:
# kafka bootstrap server
KAFKA_BROKERS="localhost:9092"
# kafka topic name
TOPIC_NAME="word_count_topic"
# group name
GROUP_NAME="spark-kafka-group"
# creating a topic
/opt/cloudera/parcels/CDH/lib/kafka/bin/kafka-topics.sh --create --topic ${TOPIC_NAME} --bootstrap-server ${KAFKA_BROKERS}
# describing a topic
/opt/cloudera/parcels/CDH/lib/kafka/bin/kafka-topics.sh --describe --topic ${TOPIC_NAME} --bootstrap-server ${KAFKA_BROKERS}
Producing messages to Kafka topic:
# producing kafka messages
/opt/cloudera/parcels/CDH/lib/kafka/bin/kafka-console-producer.sh --topic ${TOPIC_NAME} --broker-list ${KAFKA_BROKERS}
Consuming messages from Kafka topic:
# consuming kafka messages
/opt/cloudera/parcels/CDH/lib/kafka/bin/kafka-console-consumer.sh --bootstrap-server ${KAFKA_BROKERS} --group ${GROUP_NAME} --topic ${TOPIC_NAME} --from-beginning
Submitting the Spark KafkaWordCount example:
spark-submit \
--master yarn \
--deploy-mode client \
--packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.4.7.7.1.7.0-551 \
--repositories https://repository.cloudera.com/artifactory/cloudera-repos/ \
--class org.apache.spark.examples.streaming.DirectKafkaWordCount \
/opt/cloudera/parcels/CDH/lib/spark/examples/jars/spark-examples_*.jar ${KAFKA_BROKERS} ${GROUP_NAME} ${TOPIC_NAME}
... View more
02-22-2022
10:35 PM
Hi @Rajeshhadoop I don't think it is the right way to ask a set of questions in a single community article. Please create a new thread for each question.
... View more
02-21-2022
07:39 PM
Please go through the following article: https://community.cloudera.com/t5/Community-Articles/Spark-Memory-Management/ta-p/317794 The Unified Memory Manager was introduced in Spark 1.6, and there have not been many changes to it since then; Spark 3 introduces some changes in memory management.
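As a small, hedged sketch (the values shown here are simply the Spark defaults, used only for illustration), the main Unified Memory Manager knobs can be set when building the session:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("unified-memory-demo")
  .config("spark.memory.fraction", "0.6")        // fraction of the heap shared by execution and storage
  .config("spark.memory.storageFraction", "0.5") // portion of that region protected from eviction for storage
  .getOrCreate()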
... View more
02-21-2022
02:12 AM
Hi @Rajeshhadoop Please find a few references:
https://spark.apache.org/docs/2.4.0/sql-migration-guide.html
https://blog.knoldus.com/migration-from-spark-1-x-to-spark-2-x/
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_spark_post.html
Note: We don't have a document for upgrading straight from Spark 1.x to Spark 2.4.
... View more
02-14-2022
07:48 PM
1 Kudo
Hi @Rekasri The above exception occurs due to a code-related issue. Please check your code where you create/close the SparkSession object. Note: If you have already found the answer, please share the problematic code and the fixed code; it will be helpful for others.
... View more
02-08-2022
04:29 AM
Hi @kanikach I don't think we have a mechanism to list all the changes in the current release versus the previous release, other than approaching the engineering team. If you want more detailed change information, it is better to raise a Cloudera case; we will check with the engineering team and get back to you.
... View more
02-08-2022
04:12 AM
Hi @AmineCHERIFI I suspect the timezone is causing the issue. To check further, could you please share the sample data you created and the table structure? We will try to reproduce it internally. Note: Have you tried the same logic without HWC? Please test and share the results as well. HWC is not required for reading/writing external tables.
... View more
02-08-2022
04:07 AM
1 Kudo
Hi @victorescosta You need to check the producer code to see in which format the Kafka messages are produced and which Serializer class was used. You need to use the same format/serializer when deserializing the data. For example, if you used Avro while writing the data, then you need to use Avro while deserializing it. @araujo You are right; the customer needs to check their producer code and serializer class.
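For illustration only (the broker address, topic, and group are assumed values, not taken from the customer's code), a matching pair using plain String serializers looks like the sketch below; if the producer had used Avro, the consumer would need the corresponding Avro deserializer instead:
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val brokers = "localhost:9092" // assumed broker address
val topic = "demo_topic"       // assumed topic name

// Producer side: String key/value serializers
val producerProps = new Properties()
producerProps.put("bootstrap.servers", brokers)
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](producerProps)
producer.send(new ProducerRecord[String, String](topic, "key1", "hello"))
producer.close()

// Consumer side: must use the matching String deserializers
val consumerProps = new Properties()
consumerProps.put("bootstrap.servers", brokers)
consumerProps.put("group.id", "demo-group")
consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
val consumer = new KafkaConsumer[String, String](consumerProps)
consumer.subscribe(Collections.singletonList(topic))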
... View more
02-08-2022
04:01 AM
Hi @loridigia If dynamic allocation is not enabled for the cluster/application and you set --conf spark.executor.instances=1, then it will launch only one executor. Apart from that executor, you will also see the AM/driver in the Executors tab of the Spark UI.
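A minimal sketch (the application name and settings are assumptions for illustration) of pinning exactly one executor when dynamic allocation is disabled:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("single-executor-demo")
  .config("spark.dynamicAllocation.enabled", "false") // static allocation
  .config("spark.executor.instances", "1")            // exactly one executor is requested
  .getOrCreate()
// The Executors tab of the Spark UI will still list the driver in addition to this executor.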
... View more
12-07-2021
10:29 PM
1 Kudo
In this article, we will learn how to configure the Zeppelin JDBC (Phoenix) interpreter and run an example.
1. Configuring the JDBC (Phoenix) interpreter: Log in to the Zeppelin UI and click on the user name (in my case, admin) at the top right-hand corner. It will display a menu; click on Interpreter.
Click on + Create at the right-hand side of the screen.
It will display a popup menu. Enter Interpreter Name as jdbc and select Interpreter Group as jdbc. Then, it will populate Properties in table format.
Click on the + button, add the Phoenix-related properties below according to your cluster, and click on the Save button.
phoenix.driver = org.apache.phoenix.jdbc.PhoenixDriver
phoenix.url = jdbc:phoenix:localhost:2181:/hbase
phoenix.user =
phoenix.password =
2. Creating the Notebook:
Click the Notebook dropdown menu in the top left-hand corner, select Create new note, enter the Note Name as Phoenix_Test, and select the Default Interpreter as jdbc. Finally, click on the Create button.
3. Running the Phoenix queries using jdbc (Phoenix) interpreter in Notebook:
%jdbc(phoenix)
CREATE TABLE IF NOT EXISTS Employee (
id INTEGER PRIMARY KEY,
name VARCHAR(225),
salary FLOAT
)
%jdbc(phoenix)
UPSERT INTO Employee VALUES(1, 'Ranga Reddy', 24000)
%jdbc(phoenix)
UPSERT INTO Employee (id, name, salary) VALUES(2, 'Nishantha', 10000)
%jdbc(phoenix)
SELECT * FROM Employee
4. Final Results:
Happy Learning.
... View more
10-31-2021
10:24 PM
Hi By default, in the /opt/cloudera/cm-agent/service/hive/hive.sh file, the TEZ_JARS property is set to:
TEZ_JARS="$PARCELS_ROOT/CDH/jars/tez-*:$PARCELS_ROOT/CDH/lib/tez/*.jar:$CONF_DIR/tez-conf"
We need to update the TEZ_JARS property as follows:
TEZ_JARS="/opt/cloudera/parcels/CDH/jars/tez-*:/opt/cloudera/parcels/CDH/lib/tez/*.jar:$CONF_DIR/tez-conf"
After that, we need to restart the service.
... View more
10-28-2021
12:05 AM
Hi @Marwn Please check the application logs to identify why the application startup is taking X minutes. Without the application logs, it is very difficult to provide a solution.
... View more
10-27-2021
11:10 PM
Hi @EBH The Spark application failed with an OOM error. To understand why, we need to go through the Spark event logs, the application logs, and the spark-submit command; currently you have not shared any of these. As a next step, try increasing the executor/driver memory and set the memory overhead to roughly 10-20% (0.1 or 0.2) of the driver/executor memory. If the issue is still not resolved, please raise a Cloudera case and we will work on it.
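As a hedged sketch (the sizes below are arbitrary examples, not a recommendation), the executor-side settings discussed above can be set as follows; driver memory normally has to be passed at submit time (--driver-memory or --conf), because the driver JVM is already running when this code executes:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("oom-tuning-demo")
  .config("spark.executor.memory", "8g")         // assumed executor heap size
  .config("spark.executor.memoryOverhead", "1g") // roughly 10-20% of the executor memory
  .getOrCreate()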
... View more
10-25-2021
10:59 PM
Hi @SimonBergerard The precedence of Spark configuration parameters, from lowest to highest, is: spark-defaults.conf --> spark-submit/spark-shell --> Spark code (Scala/Java/Python). If you want to see the resolved parameter values, you can run in --verbose mode:
spark-submit --verbose
Please recheck the spark-submit command and parameters once again:
--conf spark.eventLog.enabled=true --conf spark.eventLog.dir=<directory> --conf spark.submit.deployMode=cluster
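A small sketch (the property is chosen arbitrarily) showing this precedence in practice: a value set in code overrides the same key passed via spark-submit --conf, which in turn overrides spark-defaults.conf:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("precedence-demo")
  .config("spark.eventLog.enabled", "true") // set in code: highest precedence
  .getOrCreate()

println(spark.conf.get("spark.eventLog.enabled")) // reflects the value set in code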
... View more
10-17-2021
11:54 PM
Hi @Paop We don't have enough information (how much data, the spark-submit command, etc.) to provide a solution. Please raise a case for this issue.
... View more
10-15-2021
11:55 PM
@LegallyBind For each Python version, you need to create a separate interpreter.
... View more
10-07-2021
04:28 AM
Hi @shivanageshch EMR is not part of Cloudera. If you are using a CDP/HDP cluster, go through the following tutorial.
Livy Configuration: Add the following properties to the livy.conf file:
# Use this keystore for the SSL certificate and key.
livy.keystore = <path-to-ssl_keystore>
# Specify the keystore password.
livy.keystore.password = <keystore_password>
# Specify the key password.
livy.key-password = <key_password>
Access Livy Server: After enabling SSL on the Livy server, it should be accessible over the HTTPS protocol: https://<livy host>:<livy port>
References: 1. https://docs.cloudera.com/cdp-private-cloud-base/latest/security-encrypting-data-in-transit/topics/livy-configure-tls-ssl.html
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
... View more
10-06-2021
09:34 PM
Hi @LegallyBind Please find the following tutorial. https://community.cloudera.com/t5/Customer/How-to-use-multiple-versions-of-Python-in-Zeppelin/ta-p/271226
... View more
09-27-2021
10:52 PM
Hi @RAkhmadeev A RejectedExecutionException may be thrown by a ThreadPoolExecutor for the following reasons: 1. The thread pool is shut down before the task is processed. 2. The thread pool queue is full and no further tasks can be accepted. In your case, this issue is occurring due to HBASE-24844. Please raise a Cloudera case to work on this issue further.
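For illustration only (this is plain JVM code, not HBase internals), the first case can be reproduced with a simple thread pool:
import java.util.concurrent.{Executors, RejectedExecutionException}

val pool = Executors.newFixedThreadPool(1)
pool.shutdown() // the pool is shut down before the task is submitted
try {
  pool.submit(new Runnable { def run(): Unit = println("work") })
} catch {
  case _: RejectedExecutionException => println("task rejected: executor already shut down")
}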
... View more
09-24-2021
03:53 AM
Hi @Tomas79 While launching spark-shell, you need to add the spark.yarn.access.hadoopFileSystems parameter, and also ensure that the dfs.namenode.kerberos.principal.pattern parameter is set to * in the core-site.xml file. For example, # spark-shell --conf spark.yarn.access.hadoopFileSystems="hdfs://c1441-node2.coelab.cloudera.com:8020"
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/09/24 07:23:25 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
Spark context Web UI available at http://c2441-node2.supportlab.cloudera.com:4040
Spark context available as 'sc' (master = yarn, app id = application_1632395260786_0004).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0.7.1.6.0-297
/_/
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_232)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val textDF = spark.read.textFile("hdfs://c1441-node2.coelab.cloudera.com:8020/tmp/ranga_clusterb_test.txt")
textDF: org.apache.spark.sql.Dataset[String] = [value: string]
scala> textDF.show(false)
+---------------------+
|value |
+---------------------+
|Hello Ranga, |
| |
+---------------------+
... View more
09-14-2021
11:38 PM
1 Kudo
Hi @Seaport As you know, resource managers like YARN, Standalone, and Kubernetes create containers. Internally, the resource manager uses a shell script to launch the containers. Based on the available resources, it will create one or more containers on the same node.
... View more
09-14-2021
10:46 PM
Hi @Seaport Please check the following example; it may help. https://kontext.tech/column/spark/284/pyspark-convert-json-string-column-to-array-of-object-structtype-in-data-frame
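In case the link becomes unavailable, here is a minimal sketch (the column and field names are assumptions) of parsing a JSON string column with from_json in spark-shell:
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._
import spark.implicits._

// Assumed sample data: a column holding a JSON array of objects as a string.
val df = Seq("""[{"id":1,"name":"Ranga"},{"id":2,"name":"Nishanth"}]""").toDF("json_col")

val schema = ArrayType(new StructType().add("id", IntegerType).add("name", StringType))
val parsedDF = df.withColumn("parsed", from_json($"json_col", schema))
parsedDF.show(false)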
... View more
09-08-2021
03:50 AM
In this tutorial, we will learn how to create Apache Ozone volumes, buckets, and keys. After that, we will see how to create an Apache Hive table using Apache Ozone, and finally how we can insert/read the data from Apache Spark.
Ozone
Create the volume with the name vol1. # ozone sh volume create /vol1
21/08/25 06:23:27 INFO rpc.RpcClient: Creating Volume: vol1, with root as owner.
Create the bucket with the name bucket1 under vol1 . # ozone sh bucket create /vol1/bucket1
21/08/25 06:24:09 INFO rpc.RpcClient: Creating Bucket: vol1/bucket1, with Versioning false and Storage Type set to DISK and Encryption set to false
Hive
Launch the beeline shell.
Create the employee table in Hive.
Note: Update the om.host.example.com value.
CREATE DATABASE IF NOT EXISTS ozone_db;
USE ozone_db;
CREATE EXTERNAL TABLE IF NOT EXISTS `employee`(
`id` bigint,
`name` string,
`age` smallint)
STORED AS parquet
LOCATION 'o3fs://bucket1.vol1.om.host.example.com/employee';
Spark
Spark2:
Launch spark-shell spark-shell
Run the following query to insert/read the data from the Hive employee table. spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (1, "Ranga", 33)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (2, "Nishanth", 3)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (3, "Raja", 59)""")
spark.sql("SELECT * FROM ozone_db.employee").show()
Spark3:
Launch spark3-shell spark3-shell --jars /opt/cloudera/parcels/CDH/lib/hadoop-ozone/hadoop-ozone-filesystem-hadoop3-*.jar
Run the following query to insert/read the data from the Hive employee table. spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (1, "Ranga", 33)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (2, "Nishanth", 3)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (3, "Raja", 59)""")
spark.sql("SELECT * FROM ozone_db.employee").show()
Kerberized environment
Pre-requisites:
Create a user and provide proper Ranger permissions to create Ozone volume and buckets, etc.
kinit with the user.
Spark2:
Launch spark-shell Note: Before launching spark-shell, update the om.host.example.com value. spark-shell \
--conf spark.yarn.access.hadoopFileSystems=o3fs://bucket1.vol1.om.host.example.com:9862
Run the following query to insert/read the data from Hive employee table. spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (1, "Ranga", 33)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (2, "Nishanth", 3)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (3, "Raja", 59)""")
spark.sql("SELECT * FROM ozone_db.employee").show()
Spark3:
Launch spark3-shell Note: Before launching spark3-shell, update the om.host.example.com value. spark3-shell \
--conf spark.kerberos.access.hadoopFileSystems=o3fs://bucket1.vol1.om.host.example.com:9862 \
--jars /opt/cloudera/parcels/CDH/lib/hadoop-ozone/hadoop-ozone-filesystem-hadoop3-*.jar
Run the following query to insert/read the data from the Hive employee table. spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (1, "Ranga", 33)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (2, "Nishanth", 3)""")
spark.sql("""INSERT INTO TABLE ozone_db.employee VALUES (3, "Raja", 59)""")
spark.sql("SELECT * FROM ozone_db.employee").show()
Notes:
If you get java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.ozone.OzoneFileSystem not found, then add /opt/cloudera/parcels/CDH/jars/hadoop-ozone-filesystem-hadoop3-*.jar to the Spark classpath using the --jars option.
In a Kerberized environment, we must specify the spark.yarn.access.hadoopFileSystems configuration; otherwise, it will display the following error: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
Thanks for reading this article. If you liked this article, you can give kudos.
... View more
08-31-2021
11:55 PM
Hi @yudh3 Is this application deployed for the first time, or is it an existing application? If it is the first time, you need to tune it according to the kind of operations you are doing. If it is an existing application, has this issue started occurring recently, or has it been there for a long time? If it started recently, check whether there has been any data change or any HDFS/Hive issue. Without the logs, it is difficult to tell what the exact issue is. Please go ahead and create a case for this issue and we will work on it.
... View more
08-30-2021
11:24 PM
Hi @Seaport Yes, it is required if you want to run Python UDFs or do something outside Spark SQL operations in your application. If you are just using the Spark SQL API, there is no runtime requirement for Python. If you are going to install Spark3, please check the supported versions below:
Spark 2.4 supports Python 2.7 and 3.4-3.7.
Spark 3.0 supports Python 2.7 and 3.4 and higher, although support for Python 2 and 3.4 to 3.5 is deprecated.
Spark 3.1 supports Python 3.6 and higher.
CDS Powered by Apache Spark requires one of the following Python versions:
Python 2.7 or higher, when using Python 2.
Python 3.4 or higher, when using Python 3 (CDS 2.0 only supports Python 3.4 and 3.5; CDS 2.1 and higher include support for Python 3.6 and higher).
Python 3.4 or higher, when using Python 3 (CDS 3).
Note: Spark 2.4 is not compatible with Python 3.8; the latest recommended version is Python 3.4+ (https://spark.apache.org/docs/2.4.0/#downloading). The Apache Jira SPARK-29536 related to Python 3.8 is fixed in Spark3.
... View more
08-30-2021
07:52 PM
Hi @Sbofa Yes, you are right. Based on the kind value, it decides which kind of Spark shell needs to be started.
... View more
08-25-2021
08:51 PM
In this tutorial, we will learn how to create Apache Ozone volumes, buckets, and keys. After that, we will see how we can access Apache Ozone data in Apache Spark.
Ozone
Create the volume with the name vol1 in Apache Ozone. # ozone sh volume create /vol1
21/08/25 06:23:27 INFO rpc.RpcClient: Creating Volume: vol1, with root as owner.
Create the bucket with the name bucket1 under vol1 . # ozone sh bucket create /vol1/bucket1
21/08/25 06:24:09 INFO rpc.RpcClient: Creating Bucket: vol1/bucket1, with Versioning false and Storage Type set to DISK and Encryption set to false
Create the employee.csv file to upload to Ozone. # vi /tmp/employee.csv
id,name,age
1,Ranga,33
2,Nishanth,4
3,Raja,60
Upload the employee.csv file to Ozone # ozone sh key put /vol1/bucket1/employee.csv /tmp/employee.csv
Add the fs.o3fs.impl property to core-site.xml
Go to Cloudera Manager > HDFS > Configuration > search for core-site.xml > Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml <property>
<name>fs.o3fs.impl</name>
<value>org.apache.hadoop.fs.ozone.OzoneFileSystem</value>
</property>
Display the files created earlier using 'hdfs' command. Note: Before running the following command, update the om-host.example.com value. hdfs dfs -ls o3fs://bucket1.vol1.om-host.example.com/
Spark
Launch spark-shell spark-shell
Run the following command to print the employee.csv file content. Note: Update the omHost value. scala> val omHost="om.host.example.com"
scala> val df=spark.read.option("header", "true").option("inferSchema", "true").csv(s"o3fs://bucket1.vol1.${omHost}/employee.csv")
scala> df.show()
+---+--------+---+
| id| name|age|
+---+--------+---+
| 1| Ranga| 33|
| 2|Nishanth| 4|
| 3| Raja| 60|
+---+--------+---+
Kerberized environment
Pre-requisites:
Create a user and provide proper Ranger permissions to create Ozone volume and buckets, etc.
kinit with the user
Steps:
Create Ozone volumes, buckets, and keys mentioned in Ozone section.
Launch spark-shell
Replace the KEY_TAB, PRINCIPAL, and om.host.example.com in spark-shell spark-shell \
--keytab ${KEY_TAB} \
--principal ${PRINCIPAL} \
--conf spark.yarn.access.hadoopFileSystems=o3fs://bucket1.vol1.om.host.example.com:9862 Note: In a Kerberized environment, we must specify the spark.yarn.access.hadoopFileSystems configuration; otherwise, it will display the following error: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
Run the following command to print the employee.csv file content. Note: Update the omHost value. scala> val omHost="om.host.example.com"
scala> val df=spark.read.option("header", "true").option("inferSchema", "true").csv(s"o3fs://bucket1.vol1.${omHost}/employee.csv")
scala> df.show()
+---+--------+---+
| id| name|age|
+---+--------+---+
| 1| Ranga| 33|
| 2|Nishanth| 4|
| 3| Raja| 60|
+---+--------+---+
scala> val age30DF = df.filter(df("age") > 30)
scala> val outputPath = s"o3fs://bucket1.vol1.${omHost}/employee_age30.csv"
scala> age30DF.write.option("header", "true").mode("overwrite").csv(outputPath)
scala> val df2=spark.read.option("header", "true").option("inferSchema", "true").csv(outputPath)
scala> df2.show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1|Ranga| 33|
| 3| Raja| 60|
+---+-----+---+
Note: If you get java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.ozone.OzoneFileSystem not found, add /opt/cloudera/parcels/CDH/jars/hadoop-ozone-filesystem-hadoop3-*.jar to the Spark classpath using the --jars option.
Thanks for reading this article. If you liked this article, you can give kudos.
... View more
08-24-2021
02:22 AM
In this article, we will learn how to register a Hive UDF using the Spark HiveWarehouseSession.
Download and build the Spark Hive UDF example. git clone https://github.com/rangareddy/spark-hive-udf
cd spark-hive-udf
mvn clean package -DskipTests
Copy the target/spark-hive-udf-1.0.0-SNAPSHOT.jar to the edge node.
Log in to the edge node and upload the spark-hive-udf-1.0.0-SNAPSHOT.jar to an HDFS location, for example, /tmp. hdfs dfs -put ./spark-hive-udf-1.0.0-SNAPSHOT.jar /tmp
Launch the spark-shell with 'hwc' parameters. spark-shell \
--jars /opt/cloudera/parcels/CDH/jars/hive-warehouse-connector-assembly-*.jar \
--conf spark.sql.hive.hiveserver2.jdbc.url='jdbc:hive2://hiveserver2_host1:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2' \
--conf spark.sql.hive.hwc.execution.mode=spark \
--conf spark.datasource.hive.warehouse.metastoreUri='thrift://metastore_host:9083' \
--conf spark.datasource.hive.warehouse.load.staging.dir='/tmp' \
--conf spark.datasource.hive.warehouse.user.name=hive \
--conf spark.datasource.hive.warehouse.password=hive \
--conf spark.datasource.hive.warehouse.smartExecution=false \
--conf spark.datasource.hive.warehouse.read.via.llap=false \
--conf spark.datasource.hive.warehouse.read.jdbc.mode=cluster \
--conf spark.datasource.hive.warehouse.read.mode=DIRECT_READER_V2 \
--conf spark.security.credentials.hiveserver2.enabled=false \
--conf spark.sql.extensions=com.hortonworks.spark.sql.rule.Extensions
Create the HiveWarehouseSession. import com.hortonworks.hwc.HiveWarehouseSession
import com.hortonworks.hwc.HiveWarehouseSession._
val hive = HiveWarehouseSession.session(spark).build()
Execute the following statement to register a Hive UDF. hive.executeUpdate("CREATE FUNCTION uppercase AS 'com.ranga.spark.hive.udf.UpperCaseUDF' USING JAR 'hdfs:///tmp/spark-hive-udf-1.0.0-SNAPSHOT.jar'")
Test the registered function, for example, uppercase. scala> val data1 = hive.executeQuery("select id, uppercase(name), age, salary from employee")
scala> data1.show()
+---+-----------------------+---+---------+
| id|default.uppercase(name)|age| salary|
+---+-----------------------+---+---------+
| 1| RANGA| 32| 245000.3|
| 2| NISHANTH| 2| 345000.1|
| 3| RAJA| 32|245000.86|
| 4| MANI| 14| 45000.0|
+---+-----------------------+---+---------+
Thanks for reading this article.
... View more