Thursday, 1 June 2017

Big data to increase graduation rates?

Here is a very good example of predictive analytics in a higher education institution. With the help of an outside consulting firm, Georgia State University used data from its past students to identify the factors that decrease a student's chances of graduating. What is remarkable is how the system worked: whenever a student was flagged for one of the 700 factors, the academic adviser was notified. The university increased its graduation rate by 6 percent over the last three years, and students are graduating half a semester earlier, which saves a lot of money in tuition. The project appears to have been successful, but what about privacy? Should educational institutions be able to flag their students for possible future failure, even if it is for a good cause? What if the information gets into the hands of people who are not authorized to see it? We will explore more about privacy and ethics in the following weeks, but feel free to comment with what you think.
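To make the idea concrete, here is a minimal sketch of how a flag-and-notify system like this could work. This is not GSU's actual system; the rule names and thresholds are made up for illustration.

```python
# Toy sketch of rule-based risk flagging (hypothetical rules, not GSU's).
# Each rule inspects a student's record; any tripped rule becomes an
# alert that would be routed to the student's academic adviser.

RISK_RULES = {
    "low_intro_grade": lambda s: s.get("intro_major_grade", 4.0) < 2.0,
    "missed_registration": lambda s: not s.get("registered_next_term", True),
    "gpa_drop": lambda s: s.get("gpa_change", 0.0) < -0.5,
}

def flag_student(student):
    """Return the names of every risk rule this student trips."""
    return [name for name, rule in RISK_RULES.items() if rule(student)]

student = {"intro_major_grade": 1.7, "registered_next_term": True, "gpa_change": -0.8}
for flag in flag_student(student):
    print(f"Adviser alert: student flagged for '{flag}'")
```

A real system would derive hundreds of such factors statistically from historical records rather than hand-coding them, but the adviser-notification loop is the same shape.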

http://www.npr.org/sections/ed/2016/10/30/499200614/how-one-university-used-big-data-to-boost-graduation-rates


Wednesday, 31 May 2017

Analytics of a Tweeting President

I found this story today:

http://www.npr.org/2017/02/04/513469456/when-trump-tweets-this-bot-makes-money

Basically, someone decided to see what happens to the stock market when Trump tweets about something.

A company called T3 created a bot -- "Trump and Dump Bot" -- to play the stock market when Trump tweets something negative about publicly traded companies. Read the story. Or read about how the bot works here:  https://www.t-3.com/works/the-trump-and-dump-bot/
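The core idea is simple enough to sketch. The toy below is not T3's actual code: the real bot uses genuine sentiment analysis, while this version just matches a few hand-picked negative phrases against a hypothetical watchlist of companies.

```python
# Toy version of the Trump-and-Dump idea (not T3's implementation):
# if a tweet mentions a tracked public company and reads as negative,
# emit a signal to short that company's stock.

TRACKED = {"Boeing": "BA", "Ford": "F", "Lockheed Martin": "LMT"}  # hypothetical watchlist
NEGATIVE_PHRASES = {"out of control", "cancel", "ridiculous", "disaster"}

def trade_signal(tweet):
    """Return ("SHORT", ticker), ("HOLD", ticker), or None."""
    text = tweet.lower()
    for company, ticker in TRACKED.items():
        if company.lower() in text:
            if any(phrase in text for phrase in NEGATIVE_PHRASES):
                return ("SHORT", ticker)
            return ("HOLD", ticker)
    return None

print(trade_signal("Boeing is building a brand new 747 Air Force One... costs are out of control"))
```

Replacing the keyword check with a trained sentiment model is what makes the real bot interesting; everything else is plumbing to a brokerage API.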

Not to worry, it's all in fun, and they're not profiting. Proceeds go to the ASPCA -- save the puppies!



Big Data: Biased?

Sam shared two articles with me and suggested you all might be interested. I concur! Thanks, Sam!

He states: I ran across an interesting article that you may want to share in your blog. It essentially explains that a lot of big data samples are not representative of the truth and carry inaccuracies that may violate discrimination laws. They mention the Fair Credit Reporting Act, the Equal Credit Opportunity Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Fair Housing Act, the Genetic Information Nondiscrimination Act, the Federal Trade Commission Act, etc. Some of this big data may have hidden biases within it, and it is possible for accidental discrimination to happen. They talk about a smartphone app for reporting potholes in Boston: people in lower-income communities did not have many smartphones, which meant that a lot of the potholes in those neighborhoods were not getting fixed for some time. They mention that these kinds of accidental discrimination slips could get people in trouble in the big data world; people getting labeled by the data could end up causing all sorts of problems in the future. I thought it was a thought-provoking look at the future of big data.
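The pothole example is really about sampling bias, and a tiny simulation shows how it plays out. The numbers below are made up purely for illustration: both neighborhoods have identical roads, but reported counts track phone ownership, not road condition.

```python
import random

# Toy illustration of the Boston pothole example (all rates invented):
# two neighborhoods have the SAME number of potholes, but residents
# report them through a smartphone app, and smartphone ownership differs.
random.seed(0)

ACTUAL_POTHOLES = 100  # same in both neighborhoods
REPORT_RATE = {"high_income": 0.8, "low_income": 0.3}  # chance each pothole gets reported

reports = {
    hood: sum(random.random() < rate for _ in range(ACTUAL_POTHOLES))
    for hood, rate in REPORT_RATE.items()
}
print(reports)  # the app's data makes the low-income roads look better than they are
```

If the city dispatches repair crews by report count, the data-driven policy quietly underserves the neighborhood that generated fewer data points.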


There was one other article, from the Harvard Business Review, referenced within this one; it also stressed the importance of using caution when big data is linked to location or human culture. It was interesting to learn that there are several cognitive biases that can be involved in interpreting this big data.

Here is the other referenced article: https://hbr.org/2013/04/the-hidden-biases-in-big-data

So, what do you all think? 
Note that we'll be discussing ethical issues toward the end of the course, and we'll totally want to think about these articles then!