In the wake of all the rigmarole surrounding the Snowden leaks and the details of the NSA intelligence-gathering apparatus, it's suddenly very clear to the average person just how much data is out there, and how difficult it must be to recognize, organize, and filter it for a usable purpose. Even if we try to minimize our digital footprint, each of us nonetheless generates an incredible amount of data that represents us in the digital realm. To talk us through the systems used to parse such vast quantities of data into a usable format, we are happy to welcome Stuart Geiger to this month's BkkSci. Stuart's current research sits at the intersection of data science and artificial intelligence (AI), in the area often branded as "Big Data." These systems collect massive, diverse, and complex data sets, and then use that data to teach computers how to identify patterns and make decisions. Stuart will talk about his work both in building these automated agents to support the production of knowledge, and in studying how these systems are changing how scientists, governments, businesses, and ordinary people like you and me come to know the world.