
Near/real-time Outlook email ingestion

New Contributor

Hello everyone,

 

I have a task that requires ingesting emails as soon as they are received in Outlook, then extracting some information via a keyword-based search, and storing the extracted information in Hive:

 

Near/real-time email ingestion --> extract values --> store into Hive

 

I read that NiFi can do the job, but it isn't included in Cloudera.

My question: is there any Cloudera service (Flume/Kafka/Spark, ...) that can connect to Outlook and capture emails that satisfy certain criteria, or do I have to write a Python script using imaplib and run it with cron at a fixed interval?
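
For reference, a minimal sketch of that cron + imaplib fallback (Python 3; the host, credentials, keyword, and the final Hive-loading step are placeholders, not a tested implementation):

import email
import imaplib

IMAP_HOST = "outlook.office365.com"   # assumed IMAP endpoint
USER = "user@example.com"             # placeholder credentials
PASSWORD = "secret"
KEYWORD = "invoice"                   # placeholder search keyword

def fetch_matching_emails():
    conn = imaplib.IMAP4_SSL(IMAP_HOST)
    conn.login(USER, PASSWORD)
    conn.select("INBOX")
    # Server-side IMAP search: unread messages whose body contains the keyword.
    status, data = conn.search(None, "UNSEEN", 'BODY "%s"' % KEYWORD)
    results = []
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        results.append((msg["Subject"], msg["From"]))
    conn.logout()
    return results

if __name__ == "__main__":
    # Run this script from cron; replace the print with the Hive-loading step.
    for subject, sender in fetch_matching_emails():
        print(subject, sender)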

 

Any hint is appreciated.

1 ACCEPTED SOLUTION

New Contributor

Hello guys,

 

Yeah, that was a long time ago. I managed to get the job done using the following pipeline:

 

Logstash -> Kafka -> Spark 
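
For future readers, a rough sketch of the Spark end of that chain, assuming Logstash (e.g., its imap input and kafka output plugins) publishes email events to a Kafka topic; the broker address, topic, keyword, and table name below are placeholders:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("email-ingest")
         .enableHiveSupport()
         .getOrCreate())

# Read the raw email events that Logstash published to Kafka.
emails = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "emails")                     # placeholder topic
          .load()
          .selectExpr("CAST(value AS STRING) AS body"))

# Keep only messages containing the keyword of interest.
matches = emails.filter(col("body").contains("invoice"))     # placeholder keyword

def write_to_hive(batch_df, batch_id):
    # Append each micro-batch to a Hive table (placeholder name).
    batch_df.write.mode("append").saveAsTable("mail_extracts")

query = matches.writeStream.foreachBatch(write_to_hive).start()
query.awaitTermination()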

 


4 REPLIES

Rising Star

Since the question was asked, the situation has changed: when Hortonworks and Cloudera merged, NiFi became supported by Cloudera.

 

Shortly after, the integrations with CDH were also completed, so NiFi is now a fully supported and integrated component.

 

Hence the question already contains the answer: please look into NiFi for solving this use case (for example, NiFi's ConsumeIMAP or ConsumeEWS processors can poll a mailbox, and PutHiveStreaming can land the extracted records in Hive).


- Dennis Jaheruddin

If this answer helped, please mark it as 'solved' and/or if it is valuable for future readers please apply 'kudos'.


This seems relevant:

 

In Python 2, unicode objects can only be printed if they can be converted to ASCII. If a unicode object can't be encoded as ASCII, you'll get that error. You probably want to encode it explicitly and then print the resulting str:

# encode the unicode object to a UTF-8 byte string before printing (Python 2)
print post.text.encode('utf-8')
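
A tiny Python 2 snippet illustrating the behaviour (whether the implicit print fails depends on the terminal's encoding; the string is just an example):

text = u'r\xe9sum\xe9'        # a unicode object with non-ASCII characters

try:
    print text                 # implicit ASCII conversion can raise here
except UnicodeEncodeError as err:
    print 'implicit print failed:', err

print text.encode('utf-8')     # explicit UTF-8 encoding avoids the error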

 


- Dennis Jaheruddin

If this answer helped, please mark it as 'solved' and/or if it is valuable for future readers please apply 'kudos'.
