Search-Lab
Model Extraction Attacks: An Emerging Threat to AI Systems

Creating large language models is a resource-intensive and time-consuming task.
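The point of a model extraction attack is to sidestep exactly that cost: the adversary copies a model simply by querying it and fitting a surrogate to the answers. The toy sketch below is my own illustration rather than code from the article; it assumes a hypothetical black-box victim that happens to be a plain linear model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical victim: a black-box model we can only query, never inspect.
# For this sketch it is a plain linear model with hidden weights.
hidden_weights = rng.normal(size=4)

def victim_predict(x):
    # Stands in for an API call to the deployed model.
    return x @ hidden_weights

# Extraction: send random queries and record the victim's answers.
queries = rng.normal(size=(100, 4))
answers = victim_predict(queries)

# Fit a surrogate to the stolen input/output pairs.
stolen_weights, *_ = np.linalg.lstsq(queries, answers, rcond=None)

print("hidden weights:", np.round(hidden_weights, 3))
print("stolen weights:", np.round(stolen_weights, 3))
```

Real attacks target far more complex models and need far more queries, but the workflow is the same: query, record, fit.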

GPT-4 limitations

GPT-4 thinks lightning fast. But only if it knows where to go.

It's an ERC20 token, so it's secure, right?

Richard Kovacs

ERC20 and BEP20 are well-known token standards in the cryptocurrency world, but they do not tie the developer's hands as much as we might think. I created a token that works a bit differently than you would expect.

To initialize or not to initialize - the dirty pipe vulnerability

Gergely Eberhardt

Around February 2022, an innocent-looking Linux kernel bug started corrupting log files. Digging in and analyzing the root cause led to the discovery of the dirty pipe vulnerability, which allows attackers with local access to escalate their privileges to root. Oh no, was it an overflow again? Not this time; read on to find out!

Nobody is wrong, yet everyone knows something is wrong

Attila Szasz

Every once in a while, there is that stupid one-liner implementation bug that can be found in all critical systems, paired with that fancy exploitation technique nobody has thought of in the past century, resulting in a security vulnerability that not only disrupts the whole internet but also lets all hell break loose for cybersecurity professionals, IT admins and developers alike. The Log4Shell vulnerability is not one of those, even though the problem is more severe than that.

Injection defenses

Daniel Szpisjak

Injection defenses rely on making your code aware of the data structure it manipulates. When this is done well, just enough of the data structure's internals is exposed to your code that the rest can be hidden completely behind the interface. Taking this approach will lead you to think of interfaces as security contracts.
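The classic embodiment of this idea is the parameterized query: the interface itself is told which parts are query structure and which parts are data, so untrusted input can never rewrite the structure. The sketch below is my own minimal illustration rather than code from the article, using Python's built-in sqlite3 module.

```python
import sqlite3

# In-memory database with a single table, just for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # The SQL structure is fixed in the code; the ? placeholder tells the
    # interface that `name` is data, never part of the query structure.
    query = "SELECT name, role FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()

# Normal lookup works as expected.
print(find_user("alice"))          # [('alice', 'admin')]

# A classic injection payload stays inert: it is compared as a literal string.
print(find_user("' OR '1'='1"))    # []
```

The find_user helper (a name invented for this sketch) never concatenates user input into SQL, so the payload in the second call is matched as a literal name instead of altering the WHERE clause.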