Support Questions


How to remove the space and dots and convert into lowercase in Pyspark

Explorer

I have a pyspark dataframe with names like

N. Plainfield
North Plainfield
West Home Land
NEWYORK
newyork
So. Plainfield
S. Plaindield

Some of them contain dots and spaces between initials and some do not. How can they be converted to:

n Plainfield
north plainfield
west homeland
newyork
newyork
so plainfield
s plainfield

(with no dots and spaces between initials and 1 space between initials and name)

I tried the following, but it only replaces the dots and doesn't remove the spaces between initials:

names_modified = names.withColumn("name_clean", regexp_replace("name", r"\.",""))

After removing the whitespace and dots, is there any way to get the distinct values, like this?

north plainfield
west homeland
newyork
so plainfield

1 ACCEPTED SOLUTION

Master Collaborator

Hi @suri789, can you try the below and share your feedback?

>>> df.show()
+----------------+
|           value|
+----------------+
| N. Plainfield|
|North Plainfield|
| West Home Land|
| NEWYORK|
| newyork|
| So. Plainfield|
| S. Plaindield|
| s Plaindield|
|North Plainfield|
+----------------+
>>> from pyspark.sql.functions import regexp_replace, lower
>>> df_tmp=df.withColumn('value', regexp_replace('value', r'\.',''))
>>> df_tmp.withColumn('value', lower(df_tmp.value)).distinct().show()
+----------------+
|           value|
+----------------+
| s plaindield|
| n plainfield|
| west home land|
| newyork|
| so plainfield|
|north plainfield|
+----------------+
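For anyone who wants to check the two regex steps without a Spark session, the same pipeline can be sketched in plain Python with the `re` module, using `set()` as a stand-in for `.distinct()`. This is only a sketch of the logic above, not the PySpark API:

```python
import re

# Same sample values as in df.show() above.
names = [
    "N. Plainfield", "North Plainfield", "West Home Land",
    "NEWYORK", "newyork", "So. Plainfield", "S. Plaindield",
    "s Plaindield", "North Plainfield",
]

# Step 1 mirrors regexp_replace('value', r'\.', ''): strip the dots.
# Step 2 mirrors lower(df_tmp.value): lowercase everything.
cleaned = [re.sub(r"\.", "", name).lower() for name in names]

# set() plays the role of .distinct(): six distinct values remain.
distinct = sorted(set(cleaned))
print(distinct)
```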


4 REPLIES


Explorer

Thanks jagadeesan,

But you're still getting duplicate values:

Explorer
so plainfield and s plainfiled are the same.

Master Collaborator

Hi @suri789, these are two different values; I don't see any duplicates here:

so plainfield
s plainfiled

Also, from the output, I don't see any duplicate values; all rows are distinct:

+----------------+
|           value|
+----------------+
| s plaindield|
| n plainfield|
| west home land|
| newyork|
| so plainfield|
|north plainfield|
+----------------+

 
Please note: "n plainfield & north plainfield" or "s plainfield & so plainfield" are different values, because we didn't write any custom logic saying that 'n' means 'north' or 's' means 'so'.
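If you do want 's' and 'so' (or 'n' and 'north') to collapse to one value, that takes exactly the kind of custom logic mentioned above. A minimal sketch in plain Python, where the abbreviation map is entirely an assumption invented for illustration, not something the thread defines:

```python
import re

# Hypothetical abbreviation map -- these pairings are assumptions,
# not part of the accepted solution above.
ABBREVIATIONS = {"n": "north", "s": "so", "w": "west"}

def normalize(name: str) -> str:
    """Drop dots, lowercase, collapse whitespace, expand a leading initial."""
    cleaned = re.sub(r"\s+", " ", name.replace(".", "").lower()).strip()
    first, _, rest = cleaned.partition(" ")
    if rest and first in ABBREVIATIONS:
        return ABBREVIATIONS[first] + " " + rest
    return cleaned
```

In PySpark this mapping could be applied through a UDF, at the usual performance cost of UDFs compared to built-in column functions.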