It’s been two days since I last wrote; I’ve been buried in back-to-back midterms: COMP 579 and MATH 463.

For the COMP 579 exam, I initially thought I was prepared enough that I barely needed to review. Then I ran into a classmate at the library and realized there were two whole chapters I’d been planning to just skip; I figured if they showed up, it was my own bad luck. I ended up grinding through them anyway. Sure enough, one of those chapters turned into a big question on the exam. I attempted it and probably got it wrong, but the review still paid off. Afterward I was in a great mood and wanted to compare answers with people, only to realize I’d gotten a lot wrong. It gave me a weird flashback to middle school, when everyone would compare answers right after a test. Back then I genuinely didn’t get it: why add to your own misery? Now I understand. It’s the urge to check whether what you learned actually held up on real problems. There’s a real joy in that. Too bad my student life feels like it’s almost over.

The second day, today, was the math exam, and I was exhausted. After COMP 579 I came home intending to study but ended up gaming for a long time first. I started reviewing around 10 pm and went to bed past 1 am. This morning I reviewed outside the classroom before going in. The moment I sat down, everything felt like it had evaporated from my brain. I just wrote down every theorem and derivation I thought was relevant and hoped for partial credit. The midterm is worth 40% of the course grade, and 55% overall is still a C-minus. Even if I bombed it entirely, there’s still a path through. Worst case, I don’t get the Minor in Math. But I have to believe I wrote something worth a few points. I hope I pass.
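
For what it’s worth, here is a quick sanity check on that “path through”: a minimal sketch in Python, assuming the midterm really is 40% of the grade and 55% overall is the cutoff (both from above), with everything else lumped into “the remaining 60%”.

```python
# Back-of-envelope check: if the midterm is 40% of the grade and 55%
# overall is the cutoff, what must the remaining 60% average out to?
midterm_weight = 0.40
passing_overall = 55.0

for midterm_score in (0, 20, 40):   # hypothetical midterm percentages
    needed = (passing_overall - midterm_weight * midterm_score) / (1 - midterm_weight)
    print(f"midterm {midterm_score:>2}% -> need {needed:.1f}% on the remaining 60%")

# midterm  0% -> need 91.7% on the remaining 60%
# midterm 20% -> need 78.3% on the remaining 60%
# midterm 40% -> need 65.0% on the remaining 60%
```

So the path exists, but with a zero on the midterm it means averaging roughly 92% on everything else; every point of partial credit lowers that bar.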

This afternoon at office hours I ran into a classmate I know from COMP 550, and we ended up having a long conversation about AI. I finally got to say out loud what I’ve been thinking: why are LLMs the face of AI right now? Because an LLM gives everyone, anyone, a sense of accomplishment. It tears down the walls between industries and disciplines. Anyone can experience that moment where text streams out word by word and the computer suddenly feels alive. It lowers the barrier to using this kind of technology for everyone. OpenClaw does the same thing at the next level: it lowers the difficulty of using LLMs even further and lets you get things done with a single sentence. The effort-to-reward ratio is wildly asymmetric, which is exactly why it’s so addictive and why it blew up.

But at the same time, I really think AI is making people lazy and dumb. There’s a saying that the way to ruin someone is to let them do nothing but play and never want to learn, and I feel like AI is doing exactly that now. I’m a TA for COMP 202, and I see it clearly. In previous semesters we’d have 200+ posts on the course forum before the first midterm. Now the midterm is over, so the semester is half done, and we’ve only just crossed 270. The drop in posts seems to track the rise in AI capability. Worse, some students are on Assignment 3 and still can’t write basic syntax or follow basic programming logic. Show them a problem and they have no idea where to start. This would have been unthinkable before. For simple foundational code, no safeguard really matters anymore; AI will write it for them regardless. If something is free and takes zero effort, why bother learning? Why come to office hours? And this is happening at a university, among people who in theory chose to be here to learn. I think that’s genuinely serious: it could leave the next generation knowing only how to use AI, without ever really learning anything.

I should also hold a mirror up to myself. On the COMP 579 assignments, I’ve basically been using AI for almost everything too. I do review and re-understand all the content afterward, but I’ll admit my grasp of the material is nowhere near as deep as when I was taking COMP 252. I keep thinking about turning this into a video: what if some advanced civilization, wanting to suppress Earth’s technological progress, used something gentler and more invisible than the protons in The Three-Body Problem, and simply let us discover LLMs? Once we found LLMs, we’d slowly, imperceptibly become lazier, and technological progress would stall. I genuinely believe real technological advancement can’t come from LLMs alone, because progress requires creativity, and LLMs predict from patterns rather than create. Though I suppose this needs more research before anyone can say it definitively; we don’t actually know yet whether LLMs, even through prediction, can produce genuine novelty.

Later this afternoon I helped another student with COMP 202, and it was exactly the problem I described above. A guy with slightly darker skin, maybe Asian, maybe from somewhere else, and he had absolutely no sense of the syntax. He couldn’t write a single line of the first problem. I told him to pull a character from the string by index, and he didn’t even know how to index into a string. How did he get through the first two assignments? Obviously with AI. There was nothing I could do about that, but I still tutored him for an hour. He showed up at 3:30 pm, with only thirty minutes left in my office hours, and I stayed until 4:30.
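
For scale, this is the level he was stuck at: a minimal sketch, assuming Python (which is what COMP 202 uses); the variable names are just made up for illustration.

```python
# Pulling characters out of a string by index, the step he was stuck on.
word = "hello"

first = word[0]      # 'h'   (indexing starts at 0)
last = word[-1]      # 'o'   (negative indices count from the end)
prefix = word[:3]    # 'hel' (a slice gives back a substring)

print(first, last, prefix)   # h o hel
```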

When I went out to refill my water bottle I ran into Faten; a midterm exam review session was going on, and Faten had come to sit in on it. Back in the office, another regular, a first-year Chinese student, came by, and we had a long talk. I told him that you don’t come to university to learn; you come to use the network, the opportunities, and the resources here to work toward whatever you actually want. If you want to go to grad school, what you learn in class isn’t going to cut it. If you want to find a job, same story. I recommended he look at the 来offer (LaiOffer) talk and their roadmap. It’s a bit morally questionable that they push everyone to pivot to CS, but the roadmap itself is accurate and genuinely useful. I also invited him to join my skip-lecture project. He said yes happily, though he’s in the middle of a Hackathon right now, so we’ll see how his progress goes. I’ll send some videos to the group over the next few days.

Got home around 5 or 6, plugged the computer into the projector, and played Slay the Spire plus a typing game. The typing game might actually help my touch-typing: it’s roguelike-style, and you defeat enemies by typing. Pretty fun, worth practicing more. The only thing is it throws you straight into full-keyboard practice with no phased learning, so the curve is steep at the start.

In the evening I fixed the issue with my Windows machine rebooting endlessly. While gaming I also looked into whether it made sense to buy a few Macs and set up a cluster for running models locally. Verdict: not worth it. Local models just aren’t good enough for my development needs, and even the highest-end configuration, four Macs running inference together, still can’t handle a 200K context window. I fill that window all the time in Claude Code. If it can’t even do 200K, why bother with local inference? It’s slow, it requires serious upfront capital, and the technology keeps advancing, so buying now means buying at a loss. I’m thinking maybe I’ll just get a slightly better server, run OpenClaw on it, and keep researching from there. If I’m in a good mood and things are going well soon, maybe I’ll pick up a Mac mini. For now I’ll stick a Mac mini label on my computer as motivation.
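
To make the 200K-context concern concrete, here is a rough sketch of why long context is so memory-hungry for local inference. All the model numbers below are hypothetical, roughly a 70B-class configuration with grouped-query attention, not any specific model’s real specs; the only point is that the KV cache alone grows linearly with context length.

```python
# Rough KV-cache size at long context. Every parameter here is a
# hypothetical 70B-class configuration, chosen only for illustration.
layers = 80            # transformer layers
kv_heads = 8           # key/value heads (grouped-query attention)
head_dim = 128         # dimension per head
bytes_per_value = 2    # fp16 / bf16
context = 200_000      # tokens

# 2x for keys and values, per layer, per head, per token
kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_value * context
print(f"KV cache at {context:,} tokens: ~{kv_bytes / 1e9:.0f} GB")  # ~66 GB
```

And that cache sits on top of the model weights themselves, per concurrent request, so unified memory fills up well before speed even becomes the issue.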

Today I got up after one o’clock and spent a long time debating whether to install OpenClaw on my Asus handheld. After hesitating for quite a while, I finally decided to go for it. First I had to choose a Linux distro. Most people recommend Arch Linux on this device: there are more tutorials, and the RGB lighting and handheld management tools are best supported there. The installation process was pretty hardcore, almost entirely command-line, manually typing commands to install the system. The silver lining is that once everything was done, the desktop actually looked very nice. After that I started installing OpenClaw and Claude Code on the handheld, used Claude Code to help set up the various supporting tools, and then used OpenClaw to gradually replace the workflows I had previously cobbled together myself. My main concerns right now: on one hand, the setup burns through a lot of tokens and can still be a bit dumb at times; on the other, I’m wary of wiring Claude Code directly into OpenClaw in case it triggers some kind of account risk control. For now I’m relying on the remaining free CodeX quota, which is still usable for the moment. I’m also testing everything through Telegram, which feels like a decent entry point; it just needs more tweaking and experimentation with different models to see if I can find a combination that really fits.