Hi. I’m Damien, a contributor to the project currently working on the pydofo library that’s meant to replace Calibre’s PoDoFo binding.
While interacting with upstream to file a bug report, I found out that the developer and maintainer of the library is experimenting with using Copilot to author PRs. Per cgranade’s taxonomy, this means that unless the maintainer changes his(?) mind, the project is on track to become AI-vulnerable.
So what am I to do about it?
On one hand, this seems very much contrary to the purpose and the ethos of the rereading project. On the other hand, the landscape looks grim: complete isolation from AI dependencies is an increasingly hard problem, and the situation may repeat for any dependency at any point in time.
What do you think the course of action should be here?
Update: the exchange on the GitHub issue went a bit further and the maintainer clarified his position and “experiments”. https://github.com/podofo/podofo/issues/318#issuecomment-3967036898
“In this case it would have been an experiment only. As long as I am the maintainer, I guarantee this library will be free of AI slop. […]”
I’m not intellectually satisfied by the response, but I suppose the exchange can be taken at face value and provides some sort of policy for AI contributions to the library.
The question of how the rereading project should handle such cases still stands.
