Tuesday, August 6, 2019

Apra Notes: Takeaways and New Questions in Search of the "Truth"

Even though #aprapd2019 is over, the learning continues for me. As I reflect back on the sessions and conversations, I have some valuable takeaways, but I also have even more thoughts and questions that will lead to more discovery. 

The “journey” has only just begun.

I continue to seek the "truth." And for me that means finding the constituents most likely and best able to make a financial impact on the mission of my organization.

As I mentioned in a previous post, most, if not all, organizations are attempting to answer that question. Most are doing it differently, and although some approaches have similarities, each is unique. Despite the many different approaches, it appears each is finding some level of success.

The use of analytics to inform decisions has never been greater. This is a good thing.
Development teams are now employing data analysts and data scientists more than ever before. Also a good thing.

I have no doubt that the use of data is helping organizations be more strategic and even more successful than ever before. Again, this is a really good thing.

So what's my problem?

My problem is that if everyone is doing it a little bit differently - how do we know who is doing it the best?

You might ask "Is that important?"

It is to me. 

Why?

Because the amount of money being raised across the nonprofit world hasn't changed. Philanthropy has basically stayed at 2% of the gross domestic product (GDP) for the past several decades.
Sure some organizations are setting new records, but the overall fundraising needle hasn't moved.

And what if the organizations raising more money could raise even more money if they found that "truth" I keep referring to? How much time would they save? How many more dollars could they raise?

I have no doubt organizations embracing and implementing analytics in their fundraising efforts are more successful than ever. Those that aren't are doing a disservice to the organizations they work for (in my opinion).

So, I am back to the question: who is doing it best? Is there one absolute best way? Is there?
Are organizations using outside companies like EverTrue, GG&A, Blackbaud, or any number of others outperforming those doing it in-house? I really have no idea. 

The funny thing is all of these outside companies are also all doing it differently.

Are you feeling as crazy as I am yet?

Some of you may be thinking I am making a bigger deal out of this than I need to. After all, we should all be pleased that more and more organizations are incorporating data analytics into their processes.

So, let me tell you why I am having this conversation. 

First of all, we all have bias, and everyone out there doing analytics has their own set of biases. Every model is built with bias, and that bias shows up in the selection of the limited number of data points used to develop it.

Secondly, how many data points are enough? The most I’ve heard any one organization is using is around 30. Is that enough? How do you decide which ones to settle on? How are those decisions being made?
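To make the bias point concrete, here is a toy sketch. The constituent records, attribute names, and scoring rule are all invented for illustration; the point is simply that two analysts who pick different attribute sets will rank the very same constituents differently, so the choice of data points is itself a bias baked into the model.

```python
import random

random.seed(0)

# Hypothetical constituent records -- names and attributes are
# invented for illustration only.
constituents = [
    {"name": "A", "event_attendance": 5, "years_since_grad": 30, "email_opens": 2},
    {"name": "B", "event_attendance": 1, "years_since_grad": 5,  "email_opens": 9},
    {"name": "C", "event_attendance": 3, "years_since_grad": 15, "email_opens": 5},
]

def rank(records, attributes):
    """Rank constituents by a naive sum of the chosen attributes."""
    return sorted(records,
                  key=lambda c: sum(c[a] for a in attributes),
                  reverse=True)

# Two analysts choose different attribute sets -- each choice is a bias.
model_1 = [c["name"] for c in rank(constituents, ["event_attendance"])]
model_2 = [c["name"] for c in rank(constituents, ["email_opens"])]

print(model_1)  # ['A', 'C', 'B']
print(model_2)  # ['B', 'C', 'A']
```

Same people, same data, two different "truths" depending on which handful of attributes the modeler happened to pick.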

Don’t answer just yet.

The way I see it, even though organizations are seeing better results, I keep asking myself – “What would the results look like if we could minimize the bias AND increase the number of attributes we use to build a model? Would that impact the results? Would this allow me to identify the constituents truly most likely and most capable of having an impact on my organization's mission (the truth I seek)? What would the return on investment be?"

I believe I may have the answer. At least the answer that makes the most sense to me at this point in time (again, I'm constantly learning).

Machine learning/Artificial Intelligence can make this all possible. In fact it already is.

I know this because I’ve been having ongoing conversations with Nathan Chappell of Futurus Group. They’re using machine learning to build models to determine the most “grateful” people in an organization’s database.

They have clients where the models are based on 400+ attributes on the low end and more than 900 attributes on the high end. As I understand things, the results they’re seeing are a 4X increase over what the organizations had done previously. That's data that piques my interest in a big way.

And get this – the model is updated daily. As new data is added or the data changes – machine learning updates the model. Every. Single. Day. It also continues to learn and refine the model every day through "deep learning."
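A daily refresh loop like the one described above can be sketched in a few lines. To be clear, this is my own minimal illustration, not Futurus Group's actual pipeline, and the trivial scoring rule (gift total scaled against the current maximum) is a stand-in for a real machine learning model:

```python
# Minimal sketch of a daily model refresh; the scoring logic is a
# stand-in, not any vendor's actual method.
def train(records):
    """'Train' a trivial model: score = gift_total / max gift_total seen."""
    top = max(r["gift_total"] for r in records) or 1
    return {r["id"]: r["gift_total"] / top for r in records}

database = [{"id": 1, "gift_total": 100}, {"id": 2, "gift_total": 50}]
model = train(database)

# Each "day", new or changed data triggers a full re-score, so the
# model always reflects the current state of the database.
for day in range(3):
    database.append({"id": 3 + day, "gift_total": 25 * (day + 1)})
    model = train(database)

print(model[2])  # 0.5 -- id 2 scored against today's maximum
```

The real value of a daily cycle is exactly what the sketch shows in miniature: every score is computed against today's data, never last quarter's snapshot.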

You see, my quest to find the truth started long before I ever got to Apra. It started when I first learned of AI for development, when Gravyty arrived on the scene. That led me to more conversations with Gravyty’s Lindsey Athanasiou and then City of Hope’s Nathan Fay. It’s grown from there. I've had conversations with Blackbaud's Carrie Cobb and Lawrence Henze. I've talked to just about anyone willing and able to discuss this subject.

I’ve been conducting my own research. I’ve been evaluating it all. I’ve been asking tons of questions. As it so happens, Nathan Chappell has been the person I’ve had the most conversations with, because Futurus is doing the things that most directly reflect what I want to accomplish (that truth I keep speaking of). I’ve also had conversations with David Lawson, who has helped me make sense of it all, prior to, during, and after Apra. David always helps put things in perspective for me.

You know, I expected to find my colleagues at Apra eager to talk about AI and machine learning. That just didn’t happen very much, or I should say, nearly as often as I'd have liked. If those conversations were taking place, I wasn’t aware of them.

As I sat in each session at Apra where universities talked about analytics or what specific things they were doing to model their data – I kept coming back to the things I’ve learned about machine learning and deep learning.

I kept coming back to the fact that there isn’t a limit on the number of attributes a machine could use and through machine learning – patterns could be detected that we as human beings wouldn’t see.

Again, I’m not an expert on any of this, but I believe I can find the “truth” I’m looking for through machine learning. Apra helped make that clear to me in an indirect way.

I want to be perfectly clear here. The fact that more and more organizations are using data analytics to increase their fundraising is truly a great thing. I don't want to diminish the talents and contributions of the data scientists and data analysts out there. I just see the enormous potential machine learning can add to this process. The impact could move the fundraising needle from 2% of GDP to something bigger. The impact would be monumental.

I know, I know - you want to know how much all of this costs, right? 

Well, that's the wrong question. What you should be asking is “What is the return on investment?” Think big picture. Think impact. 

I’m not an expert on machine learning and I’m not even an expert on data analytics, but what I’ve seen and learned about machine learning and AI has made an impression on me. I’m hopeful that this conversation has piqued your interest enough that you too will begin to do your own due diligence. I hope that we can all begin to have conversations; really meaningful conversations about how machine learning can impact our work and make us much more successful. I believe this could be the tool that propels fundraising across our sector to never-before-seen heights. 

The private sector has been using this technology for years; in fact, many years. We are beginning to see AI applied in a variety of ways in the non-profit sector. More and more companies are using it - companies we are all familiar with and some we are not. Undoubtedly more will appear on the horizon soon.

I believe that five years from now – people who didn’t investigate and invest in machine learning will see that they really missed out on something transformational. Don’t ignore the impact machine learning is having right now. Today.

After reading all of this, you still might not think it's important because after all, your organization just raised the most money it has ever raised. Maybe you're comfortable with where you're at. That's understandable.

But think about what our keynote speaker at Apra had to say about that. Michelle Poler said "The enemy of success is comfort." Are you comfortable with your fundraising? Should you be? Would you like to see philanthropy's percentage of GDP grow beyond 2%?

If you want to see greater change and greater impact, I invite you to join me on this mission to find the truth.
 
Want to learn more? Ready to start your own journey of learning?

Read this: "How to explain machine learning in plain English."

Listen to David Lawson’s podcast from Jen Fila’s Chat Bytes – “How Big Data can translate into Big Good.”

Listen to Nathan Chappell’s podcast with NPO Innovators, who interviewed him about what he’s doing with Futurus. You can pick up Nathan's talk at the 12:30 mark.

Watch Nathan Chappell’s Tedx Talk on “Artificial Intelligence and the Future of Generosity.”

Again, this is just the beginning. I still have so much more to learn about all of this. Consider this an invitation to join me on this journey. The more people we have learning about this – the more likely all the right questions will be asked and the better we can improve our understanding. We really need to ask the tough questions together.

Who’s in? 

To quote Michelle Poler once again - you need to ask yourself "What's the best that can happen?"

1 comment:

  1. We agree! Other industries have jumped on the opportunity to become AI-Enabled and have achieved great results. Maybe it’s time for higher-ed to do the same? AI is a mega-trend. Ignoring it will not make it go away.
    I find that there are a lot of misconceptions about Machine Learning and AI. If higher-ed leaders learned more about these technologies, they could make better-informed decisions on whether and how to use them.

    I’ve always thought it was strange that people would pick a handful of attributes and try to use that as a proxy for affinity/propensity to give. I wondered: How do you know those are the best predictors? How do you know that what worked at their university (a small private liberal arts school on the east coast) is going to work for your university (a large public school on the west coast)? And why not use more attributes to get a better prediction?

    One caveat to your last question: while there isn’t a limit on the number of attributes a machine could use, there are some cases where you wouldn’t want to put in every attribute you have.

    That was something that surprised me in the beginning. Mathematically, ignoring a feature (essentially an attribute) is saying that its weight is 0, so why is it important to drop a feature when the model should hypothetically set its weight close to 0 anyway?

    Here’s one trivial example of a feature you should remove: Assume your database randomly generates an Alumni ID for every alumnus in your CRM. You shouldn’t include your database’s random Alumni ID in your model because there is no possible correlation between a random number and affinity/propensity. (A more likely mistake would be including an attribute that would not be known at the time of making the decision.)
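    The danger with a random ID isn't just that it carries no signal; a flexible enough model can memorize it. Here is a toy sketch (all data randomly generated, and the "model" is a deliberately extreme lookup table) showing that a model keying on random IDs looks perfect on the data it was trained on and performs at roughly chance on constituents it has never seen:

```python
import random

random.seed(42)

# Toy data: each record gets a random ID and a random "donor" label,
# so the ID carries no real signal at all.
train_set = [{"alumni_id": random.randrange(10**6), "donor": random.random() < 0.5}
             for _ in range(100)]
new_set = [{"alumni_id": random.randrange(10**6), "donor": random.random() < 0.5}
           for _ in range(100)]

# A "model" that memorizes the ID -> label mapping: extreme overfitting.
lookup = {r["alumni_id"]: r["donor"] for r in train_set}

def predict(record):
    return lookup.get(record["alumni_id"], False)  # unseen IDs: guess "no"

train_acc = sum(predict(r) == r["donor"] for r in train_set) / len(train_set)
new_acc = sum(predict(r) == r["donor"] for r in new_set) / len(new_set)

print(train_acc)  # near-perfect on the memorized data
print(new_acc)    # near chance on constituents it has never seen
```

    The gap between the two numbers is overfitting in its purest form, which is why features like this should be dropped before training rather than trusted to shrink toward zero.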

    It’s important to note that by excluding certain attributes from your model you can train the algorithm faster. You also reduce overfitting (I recommend reading about the bias-variance tradeoff) and can actually improve the accuracy of the model.

    I was planning to write the full explanation of why, but I realized that there might be a character limit on comments. If you would like to learn more about why removing certain attributes can actually help, I’d love to have a chat with you, or anyone else who is interested (daniel@cueback.com from CueBack.org).
