Extract Data from LinkedIn, Facebook, and Twitter
Posted by: Informatica Enterprise Data Integration
Search for and extract social media data that matches criteria you specify from popular social networks such as LinkedIn, Facebook, and Twitter. Download for free!
Overview
PowerExchange for LinkedIn, PowerExchange for Facebook, and PowerExchange for Twitter extract social media data that matches the search criteria you specify, making social media data mining much more efficient. All three adapters are available for free with Informatica PowerCenter Express. You can define search criteria, search for topics, and extract social media data from all three social networks, then load the extracted data to a target and use it for text analytics and sentiment analysis.
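As a rough illustration of that flow, the sketch below (plain Python, outside Informatica) searches Twitter for a keyword and scores each tweet's sentiment. The bearer token, the `requests` and `textblob` packages, the example keyword, and the Twitter v1.1 search endpoint are assumptions for the example only; the PowerExchange adapters perform the equivalent extraction inside PowerCenter mappings.

```python
# Illustrative only: a keyword search against the Twitter v1.1 search API,
# followed by simple sentiment scoring. TWITTER_BEARER_TOKEN, the `requests`
# and `textblob` packages, and the keyword are assumptions for this sketch.
import os
import requests
from textblob import TextBlob

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def search_tweets(keyword, count=100):
    """Return the text of recent tweets that contain the keyword."""
    headers = {"Authorization": "Bearer " + os.environ["TWITTER_BEARER_TOKEN"]}
    resp = requests.get(SEARCH_URL, headers=headers,
                        params={"q": keyword, "count": count, "lang": "en"})
    resp.raise_for_status()
    return [status["text"] for status in resp.json().get("statuses", [])]

def score_sentiment(texts):
    """Pair each tweet text with a polarity score in [-1, 1]."""
    return [(text, TextBlob(text).sentiment.polarity) for text in texts]

if __name__ == "__main__":
    for text, polarity in score_sentiment(search_tweets("informatica")):
        print("%+.2f  %s" % (polarity, text[:80]))
```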
Download Content
- User guide for setting up connections to Facebook, Twitter, and LinkedIn in Informatica and creating mappings.
- Sample mappings to extract:
  - Facebook posts from a public Facebook page.
  - LinkedIn profiles of all the connections for the user account whose authentication token you provide.
  - Tweets that contain a specific keyword.
Features
- Informatica 9.6.1
- Informatica PowerCenter Express 9.6.1
Comments
Hi Team,
We are testing the load of a 667K-row table, READING from an on-premises database and WRITING to a cloud database.
The test was done with both Informatica and a Python program from Oracle (cx_OracleTools: https://github.com/anthony-tuininga/cx_OracleTools/blob/main/CopyData.py).
Using the table WC_PTC_PSA_EST_VS_ACTUALS_FS (667K rows), here are the timings:
• Informatica on-prem to EXACS (Oracle Cloud) = 15 minutes
• Python program (CopyData) on-prem to EXACS (Oracle Cloud) = 9 seconds
The Python code uses executemany for a more efficient bulk load (INSERT), and we also tweaked the Oracle client prefetch settings for higher (READ) throughput.
Questions:
- How can we achieve the same 9-second performance in Informatica?
- Are there any Informatica settings that influence the bulk-insert array size and read prefetch size, similar to what we tuned in the Python program and Oracle client?
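For reference, a minimal sketch of the executemany/prefetch pattern this comment describes (batched fetches tuned with arraysize and prefetchrows, and one executemany() call per batch for the insert) might look like the following. The connection strings and batch size are placeholders, Cursor.prefetchrows assumes cx_Oracle 8 or later, and this is not the actual CopyData.py code.

```python
# Sketch of a tuned Oracle-to-Oracle copy with cx_Oracle: batched fetches and
# one executemany() per batch. Connection strings and batch size are
# placeholders; Cursor.prefetchrows requires cx_Oracle 8+.
import cx_Oracle

BATCH_SIZE = 10000
TABLE = "WC_PTC_PSA_EST_VS_ACTUALS_FS"

src = cx_Oracle.connect("user/password@onprem_db")   # placeholder DSN
dst = cx_Oracle.connect("user/password@exacs_db")    # placeholder DSN

read_cur = src.cursor()
read_cur.arraysize = BATCH_SIZE          # rows fetched per round trip
read_cur.prefetchrows = BATCH_SIZE + 1   # rows prefetched by the Oracle client
read_cur.execute("SELECT * FROM " + TABLE)

# Build one bind placeholder per source column so the INSERT matches the table.
binds = ", ".join(":%d" % (i + 1) for i in range(len(read_cur.description)))
insert_sql = "INSERT INTO %s VALUES (%s)" % (TABLE, binds)

write_cur = dst.cursor()
while True:
    rows = read_cur.fetchmany(BATCH_SIZE)
    if not rows:
        break
    write_cur.executemany(insert_sql, rows)  # one round trip per batch
dst.commit()
```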