Tuesday, September 24

    There are a couple of reasons—or more appropriately, triggers—for launching braininabox. The first is plain intellectual curiosity.

    Ever since LLMs burst onto the scene and started achieving feats that blew the Turing Test out of the water, the intrigue has been building. Have we cracked the algorithm for intelligence? Is gradient descent it? Can we finally have machines that can figure out many different things as opposed to one specific thing? And then come the more philosophical questions: Have we created a mind? Is consciousness nothing but scaled computation?

    These questions genuinely interest me. They would interest me even if the current AI hype curve wasn’t at its peak. They would interest me even if some of these questions had little practical or monetary value. The curiosity is genuine.

    The other reason is more damning (of the human condition)—it’s frustration. The AI hype curve, like others before it, has unleashed a proliferation of opinions from the people least qualified to opine on the subject—which should be about 99% of the world’s population.

    Don’t get me wrong—I am not saying that lay people should not have an opinion. Opining is intrinsic to what we are as humans.

    But there are opinions and then there are opinions…

    A lay person’s opinion, I feel, should exhibit two attributes: 1) humility owing to the lack of expertise, and 2) curiosity.

    The opinion should be a mechanism to learn, not an excuse to pontificate on social media. As Naval Ravikant puts it—opinions should be strong but loosely held, ready to be let go when the evidence suggests otherwise.

    We live in a world of perspective fundamentalism, which has driven the value of an opinion down to its lowest point ever. I find it hard to accept the collective smugness with which we so brazenly claim to understand an extremely esoteric field.

    How can we afford to have an opinion on AI and its impact without gaining expertise in probability theory, statistical methods, machine learning algorithms, computational mathematics, distributed systems—and related concepts of interpretability and alignment? If we haven’t studied intelligence conceptually, mechanistically and scientifically—what gives us the right to have a strong opinion on artificial intelligence?

    So—intellectual curiosity and frustration it is. And I’d be lying if I didn’t admit that the latter is the bigger motivator for this portal.

    BrainInaBox will track the progress of AI on the road to AGI. I’m Venkat—not an AI researcher yet—but a couple of decades of experience in IT engineering lets me go down the AI tech rabbit hole. I’m also investing time in relearning the underlying math. My goal is to convey news, explain concepts and share opinions. I will, of course, hold myself accountable for my opinions and not hesitate to call my own BS if the evidence warrants it.

    For the reader or subscriber, I hope to stoke interest, expand awareness and spark educated debates on AI impact and safety research.
