#libcurl

Latest posts tagged with #libcurl on Bluesky

📅 Join us at the #curl distro meeting on March 26!
#EmbeddedSystems daniel:// stenberg:// #libcurl


10K curl downloads per year The Linux Foundation, the organization that we want to love but that so often makes that a hard bargain, has created something they call “Insights” where they gather...

#cURL #and #libcurl


Current weather in Kwamalasemutu SR SA

>> log

$ curl --verbose wttr.in/kwamalasemutu | lolcat
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
*   Trying 5.9.243 […]

[Original post on mastodon.bsd.cafe]


I am not advising you to rely solely on the online Everything curl site to get to know curl.

I often work on airgapped servers (they only see LAN segments), where having the man pages of all commands available locally is crucial.

Use the site as an **addition** to the manpages.

#curl #libcurl #programming […]


curl libcurl
curl is the Swiss Army Knife of fetching programs

curl follows the UNIX principle: it does one thing, and it does it very well, and curl has been doing it for decades.

If you want to know everything that curl does, there are man pages. The man […]

[Original post on mastodon.bsd.cafe]


#curl #libcurl #URLTransfer #opensource daniel:// stenberg://

The cURL Project Drops Bug Bounties Due To AI Slop Over the past years, the author of the cURL project, [Daniel Stenberg], has repeatedly complained about the increasingly poor quality of bug reports filed due to LLM chatbot-induced confabulations, also …read more


#Artificial #Intelligence #bug #bounty #libcurl #LLM


#libcurl grew by a mere 100 lines of code in 2025. It now stands at 149,000 lines.

libcurl - source code examples

If you have an idea for a stand-alone example in under 500 lines of C code using #libcurl, tell us!

https://curl.se/libcurl/c/example.html

curl in the middle, lots of boxes around it explaining the backends and the third party libraries that power them

#libcurl backends, the November 2025 update


#OpenSSL, #OpenPGP, or #libcurl – a protocol for secure data transfer, an encryption standard, a software library – open digital standards like these are under threat, the study's authors warn: on the one hand from Big Tech corporations, on the other from authoritarian political regimes. 👇

Eighteen years of ABI stability

Exactly eighteen years ago today, on October 30, 2006, we shipped curl 7.16.0, which among a whole slew of new features and a set of bugfixes bumped the libcurl SONAME number from 3 to 4.

## ABI breakage

This bump meant that libcurl 7.16.0 was not binary compatible with the previous releases. Users could not just easily and transparently bump up to this version from the previous one; they had to check their use of libcurl and in some cases adjust source code. This was not the first ABI breakage in the curl project, but at this time our user base was larger than at any of the previous bumps, and this time people complained about the pains and agonies such a break brought them.

## We took away FTP features

In the 7.16.0 release we removed a few FTP related features and their associated options. Before this release, you could use curl to do "third party" transfers over FTP; in this release you could no longer do that. That is a feature where the client (curl) connects to server A and instructs that server to communicate with server B and do file transfers among themselves, without sending data to and from the client. This is an FTP feature that was not implemented well in curl and it was poorly tested. It was also a feature that barely any FTP server allowed, and subsequently it was not used by many users. We ripped it out.

## A near pitchfork situation

Because so few people used the removed features, barely anyone actually noticed the ABI breakage. It remained theoretical to most users, and I believe that detail only made people more upset over the SONAME bump because they did not even see the necessity: we just made their lives more complicated for no benefit (to them).

The Debian project even decided to override our decision (_"no, that is not an ABI breakage"_) and added a local patch to their build that lowered the SONAME number back to 3 again. A patch they would stick to for many years to come.
The obvious friction this bump caused, even when in reality it did not affect many users, and the loud feedback we received, made a huge impact on me. It had not previously dawned on me exactly how important this was. I decided there and then to do my utmost to never go through this again. To put ABI compatibility at the top of the priority list. To make it one of the most fundamental key properties of libcurl.

**Do. Not. Break. The. ABI** (we don't break the API either)

## A never-breaking ABI

The decision was initially made to avoid the negativity the bump brought, but I have since come to appreciate the upsides much more. _Application authors everywhere can always and without risk keep upgrading to the latest libcurl._

It sounds easy and simple, but the impact is huge. The examples, the documentation, the applications: everything can just always upgrade and continue. As libcurl has over time become even more popular and is, compared to 2006, used in many magnitudes more installations, this has grown into an even more important aspect of the curl life. Possibly _the_ single most important property of curl.

There is a small caveat here, which is that we occasionally of course have bugs and regressions. So when I say that users can always upgrade, that is true in the sense that we have not broken the ABI since. We have however had a few regressions that sometimes have triggered some users to downgrade again, or wait a little longer for the next release that has the bug fixed.

When we took that decision in 2006 we had less than 50,000 lines of product code. Today we are approaching 180,000 lines.

## Effects of never breaking ABI

We know that once we adopt a change, we are stuck with it for decades to come. It makes us double-check every knot before we accept new changes. Once accepted and shipped, we keep supporting code and features that we otherwise could have reconsidered and perhaps removed.
Sometimes we think of a better way to do something _after_ the initial merge, but by then it is too late to change. We can always introduce new and better ways to do things, but we have to keep supporting the old way as well.

A most fundamental effect is that we can never shrink the list of options we support. We can never actually rename anything. Doing new things and features consistently over this long a time is hard, if not impossible, as we learn new things and paradigms vary through the decades.

## How

The primary way we maintain this is by manual code review and inspection of every change. Followed of course by a large range of tests that make sure the assumptions remain.

Occasionally we have (long) discussions around subtle details when someone proposes a change that might potentially be considered an ABI break. Or not. What exactly is covered by _ABI compatibility_ is not always straightforward or easy to have carved in stone. In particular since the project can be built and run on such a wide range of systems and architectures.

## Deprecating

We _can_ still remove functionality if the conditions are right. Some features and options are documented and work in a way such that something is _requested_ or _asked for_, and libcurl then tries to satisfy that ask. For example, libcurl once supported HTTP/1 pipelining like that. libcurl still provides the option to enable pipelining and applications can still ask for it, so it is still ABI and API compatible, but a modern libcurl simply will never do it because that functionality has been removed.

Example two: we dropped support for NPN a few years back. NPN is a TLS extension called Next Protocol Negotiation that was used briefly in the early days of HTTP/2 development, before ALPN was introduced and replaced it. Virtually nothing requires NPN anymore; users can still set the option asking for it, but it will never actually happen over the wire.
Furthermore, a typical libcurl build involves multiple third party libraries that provide features it needs. For things like TLS, SSH, compression and binary HTTP protocol management. Over the years, we have removed support for several such libraries and introduced support for new ones, in ways that were never visible in the API or ABI. Some users just had to switch to building curl with different helper libraries.

In reality, libcurl is typically more stable than most existing servers and URLs. The libcurl examples you wrote in 2006 can still be built with the modern libcurl, but the servers and URLs you used back then most probably cannot be used anymore.

## If no one can spot it, it did not happen

As blunt as it may sound, it has come down to this fundamental statement several times when judging whether a change is an ABI breakage or not: _if no one can spot an ABI change, it is not an ABI change._

Of course, what makes it harder than it sounds is that it is extremely difficult to actually know ahead of time whether someone will notice something. libcurl is used in so ridiculously many installations and different setups that second-guessing whatever everyone does and wants is darned close to impossible.

Adding to the challenge are the crazy long upgrade cycles some of our users seem to sport. It is not unusual to see questions appear on the mailing lists from users bumping from curl versions from eight or ten years ago. The fact that we have not heard users comment on a particular change might just mean that they are still stuck on ancient versions. Getting frustrated comments from users today about a change we landed five years ago is hard to handle.

## Forwards compatible

I should emphasize that all this means that users can always upgrade to a _later_ release. It does not necessarily mean that they can switch back to an older version without problems.
We do add new features over time, and if you start using a new feature, the application of course will not work, or even compile, if you switch to a libcurl version from before that feature was added.

## How long is never

What I have laid out here is our plan and ambition. We have managed to stick to this for eighteen years now, and there are no known blockers in the foreseeable future either.

I cannot rule out that we might at some point in the future run into an obstacle so huge or complicated that we will be forced to do the unthinkable. To break the ABI. But until we see absolutely no other way forward, it is not going to happen.

On this day last year, #libcurl celebrated its 18th anniversary of not breaking the ABI.

That makes it 19 years now.

daniel.haxx.se/blog/2024/10/30/eighteen...


On 110 operating systems In November 2022, after I had been keeping track and adding names to this slide for a few years already, we could boast about curl having run on 89 different operating syst...

#cURL #and #libcurl

A new breed of analyzers

(See how I cleverly did not mention AI in the title!)

You know we have seen more than our fair share of slop reports sent to the curl project, so it seems only fair that I also write something about the state of AI when we get to enjoy some positive aspects of this technology. Let's try doing this in chronological order.

## The magnitude of things

curl is almost 180,000 lines of C89 code, excluding blank lines. About 637,000 words in C and H files. To compare, the original novel War and Peace (a _thick_ book) consisted of 587,000 words.

The first ideas and traces of curl originated in the httpget project, started in late 1996. Meaning that there is a lot of history and legacy here. curl does network transfers for 28 URL schemes, it has run on over 100 operating systems and on almost 30 CPU architectures. It builds with a wide selection of optional third party libraries.

We have shipped over 270 curl releases, for which we have documented a total of over 12,500 bugfixes. More than 1,400 humans have contributed with commits merged into the repository, and over 3,500 humans are thanked for having helped out. It is a very actively developed project.

## It started with sleep

On August 11, 2025 a vulnerability was reported against curl that would turn out legitimate and would later be published as CVE-2025-9086. The reporter was the Google Big Sleep team. A team that claims they use "an AI agent developed by Google DeepMind and Google Project Zero, that actively searches and finds unknown security vulnerabilities in software".

This was the first report we have ever received that seems to have used AI to accurately spot and report a security problem in curl. Of course, we don't know how much AI and how much human were involved in the research and the report. The entire reporting process felt very human.
## krb5-ftp

In mid September 2025 we got a new security vulnerability reported against curl, from a security researcher we had not been in contact with before. The report, which accurately identified a problem, was not turned into a CVE only because of sheer luck: the code didn't work for other reasons, so the vulnerability couldn't actually be reached. As a direct result of this lesson, we ripped out support for krb5-ftp.

## ZeroPath

The reporter of the krb5-ftp problem is called Joshua Rogers. He contacted us and graciously forwarded us a huge list of further potential issues that he had extracted. As I understand it, mostly done with the help of ZeroPath, a code analyzer with AI powers.

In the curl project we continuously run compilers with maximum pickiness enabled, we throw scan-build, clang-tidy, CodeSonar, Coverity, CodeQL and OSS-Fuzz at the code, and we always address and fix every warning and complaint they report. So it was a little surprising that this tool could now suddenly produce over _two hundred_ new potential problems. But it sure did. And it was only the beginning.

## At three there is a pattern

As we started to plow through the huge list of issues from Joshua, we received yet another security report against curl. This time by Stanislav Fort from Aisle (using their own AI powered tooling and pipeline for code analysis).

Getting security reports is not uncommon for us; we tend to get two or three every week. But on September 23 we got another one we could confirm was a real vulnerability. Again, an AI powered analysis tool had been used. (At the time I write this blog entry, this particular issue has not been disclosed yet, so I can't link to it.)

## A shift in the wind

As I was amazed by the quality and insights in some of the issues in Joshua's initial list, I tooted about it on Mastodon, which was later picked up by Hacker News, The Register, Elektroniktidningen and more.
These newly reported issues feel quite similar in nature to the defects code analyzers typically report: small mistakes, omissions, flaws, bugs. Most of them are just plain variable mixups, return code confusions, small memory leaks in weird situations, state transition mistakes, variable type conversions possibly leading to problems, etc. Remarkably few of them are complete false positives.

The quality of the reports makes this feel like a new generation of issue identification. Like a ladder of tool evolution from the old days, where each new step has taken things up a level:

1. At some point, I think starting in the early 2000s, the C compilers got better at actually warning about and detecting many mistakes they just silently allowed back in the dark ages.
2. Then the code analyzers took us from there to the next level and found more mistakes in the code.
3. We added fuzzing to the mix in the mid 2010s and found a whole slew of problems we never realized we had.
4. Now this new breed, almost a new category, of analyzers that seem to connect the dots better and see patterns previous tools and analyzers have not been able to. And tell us about the discrepancies.

## 25% something

Out of that initial list, we merged about 50 separately identifiable bugfixes. The rest were some false positives, but also lots of minor issues that we just didn't think were worth poking at or didn't quite agree with.

## A minor tsunami

We (primarily Stefan Eissing and myself) worked hard to get through that initial list from Joshua within only a couple of days. A list we mistakenly thought was "it". Joshua then spiced things up for us by immediately delivering a _second_ list with 47 additional issues. Followed by a third list with yet another 158 potential problems. At the same time, Stanislav did a similar thing and delivered two lists with a total of around twenty possible issues.

Don't take me wrong. This is good.
The issues are of high quality, and even the ones we dismiss often contain some insight; the rate of obvious false positives has remained low and quite manageable. Every bug we find and fix makes curl better. Every fix improves a piece of software that impacts and empowers a huge portion of the world.

The total number of suspected issues submitted by these two gentlemen is now at over _four hundred_. A fair pile of work for us curl maintainers! Because these reported issues _might_ include security sensitive problems, we have decided not to publish them but to limit access to the reporters and the curl security team.

As I write this, we are still working our way through these reports, but it feels reasonable to assume that we will get even more soon…

## All code

An obvious and powerful benefit this tool seems to have compared to others is that it scans _all_ source code without needing a build. That means it can detect problems in all backends used in all build combinations. _Old style_ code analyzers require a proper build to analyze, and since you can build curl in countless combinations with a myriad of backend setups (where several are architecture or OS specific), it is literally impossible to have all code analyzed with such tools. Also, these tools can take in (parts of) third party libraries as well and find issues in the borderland between curl and its dependencies.

I think this is one primary reason it found so many issues: it checked lots of code barely any other analyzers have investigated.

## A few examples

To illustrate the level of "smartness" in this tool, allow me to show a few examples that I think show it off. These are issues reported against curl in the last few weeks, and they have all been fixed. Beware that you might have to understand a thing or two about what curl does to properly follow along.
### A function header comment was wrong

It correctly spotted that the documentation in a function header incorrectly said an argument is optional when in reality it isn't. The fix was to correct the comment.

> # `Curl_resolv`: NULL out-parameter dereference of `*entry`
>
> * **Evidence:** `lib/hostip.c`. API promise: "returns a pointer to the entry in the `entry` argument (**if one is provided**)." However, the code contains unconditional writes: `*entry = dns;` or `*entry = NULL;`.
> * **Rationale:** The API allows `entry == NULL`, but the implementation dereferences it on every exit path, causing an immediate crash if a caller passes `NULL`.

I could add that the fact that it takes comments so seriously can also trick it into reporting wrong things when the comments are outdated and state bad "facts". Which of course shouldn't happen, because comments should not lie!

### Code breaks the telnet protocol

It figured out that a piece of telnet code actually wouldn't comply with the telnet protocol and pointed it out. Quite impressively, I might add.

> **Telnet subnegotiation writes unescaped user-controlled values (tn->subopt_ttype, tn->subopt_xdisploc, tn->telnet_vars) into temp (lines 948–989) without escaping IAC (0xFF)**
>
> In lib/telnet.c (lines 948–989) the code formats Telnet subnegotiation payloads into temp using msnprintf and inserts the user-controllable values tn->subopt_ttype (lines 948–951), tn->subopt_xdisploc (lines 960–963), and v->data from tn->telnet_vars (lines 976–989) directly into the suboption data. The buffer temp is then written to the socket with swrite (lines 951, 963, 995) without duplicating CURL_IAC (0xFF) bytes. Telnet requires any IAC byte inside subnegotiation data to be escaped by doubling; because these values are not escaped, an 0xFF byte in any of them will be interpreted as an IAC command and can break the subnegotiation stream and cause protocol errors or malfunction.
### No TFTP address pinning

Another case where it seems to know the best practice for a TFTP implementation (pinning the used IP address for the duration of the transfer), and it detected that curl didn't apply this best practice in code, so it correctly complained:

> No TFTP peer/TID validation
>
> The TFTP receive handler updates state->remote_addr from recvfrom() on every datagram and does not validate that incoming packets come from the previously established server address/port (transfer ID). As a result, any host able to send UDP packets to the client (e.g., an on-path attacker or local network adversary) can inject a DATA/OACK/ERROR packet with the expected next block number. The client will accept the payload (Curl_client_write), ACK it, and switch subsequent communication to the attacker's address, allowing content injection or session hijack. Correct TFTP behavior is to bind to the first server TID and ignore, or error out on, packets from other TIDs.

### Memory leaks no one else reported

Most memory leaks are reported when someone runs code and notices that not everything is freed in some specific circumstance. We of course test for leaks all the time, but in order to see a leak in a test we need to run that exact case, and there are many code paths that are hard to travel in tests. Apart from tests, you can of course find leaks by manually reviewing code, but history and experience tell us that is an error-prone method.

> # GSSAPI security message: leaked `output_token` on invalid token length
>
> * **Evidence:** `lib/vauth/krb5_gssapi.c:205--207`. Short quote:
>
> ```c
> if(output_token.length != 4) {
>   ...
>   return CURLE_BAD_CONTENT_ENCODING;
> }
> ```
>
> The `gss_release_buffer(&unused_status, &output_token);` call occurs later at line 215, so this early return leaks the buffer from `gss_unwrap`.
>
> * **Rationale:** Reachable with a malicious peer sending a not-4-byte security message; repeated handshakes can cause unbounded heap growth (DoS).
This particular bug looks straightforward and in hindsight easy enough to spot, but it has existed like this, in plain sight, for _over a decade_.

## More evolution than revolution

I think I maybe shocked some people when I stated that the AI tooling helped us find 22, then 70, then 100 bugs, etc. I suspect people in general are not aware of, and do not think about, the kind of bugfix frequency we work at in this project. _Fixing several hundred bugs per release is a normal rate for us._ Sure, this cycle we will probably reach a new record, but I still don't gasp for breath because of it.

I don't consider this new tooling a _revolution_. It does not massively or drastically change code or how we approach development. It is however an excellent new project assistant. A powerful tool that highlights code areas that need more attention. A much appreciated evolutionary step. I might of course be speaking too early. Perhaps it will develop a lot more and then turn into a revolution.

## Ethical and moral decisions

The AI engines burn the forests, and they are built by ingesting other people's code and work. Is it morally and ethically right to use AI for improving Open Source in this way? It is a question to wrestle with, and I'm sure the discussion will go on. At least this use of AI does not generate duplicates of someone else's code for us to use, but it certainly takes lessons from, and finds patterns based on, others' code. But so do we all, I hope.

## Starting from a decent state

I can imagine that curl is a pretty good code base to use a tool of this caliber on, as curl is old and mature and all the minor nits and defects have been polished away. It is a project where we have a high bar and we want to raise it even higher. We love the opportunity to get additional help and figure out where we might have slipped. Then fix those things and try again. Over and over until the end of time.
## AIxCC

At the DEF CON 33 conference, which took place in August 2025, DARPA ran a competition called the AI Cyber Challenge, or AIxCC for short. In this contest, the competing teams used AI tools to find artificially injected vulnerabilities in projects, with zero human intervention. One of the projects used in the finals that the teams looked for problems in was… curl!

I have been promised a report or a list of findings from that exercise, as presumably the teams found something more than just the fake inserted problems. I will report back when that happens.

## Going forward

We do not yet have any AI powered code analyzer in our CI setup, but I am looking forward to adding one. Maybe several.

We _can_ ask GitHub Copilot for pull-request reviews, but from the little I've tried Copilot for reviews, it is far from comparable to the reports I have received from Joshua and Stanislav, and quite frankly it has been mostly underwhelming. We do not use it. Of course, that can change, and it might turn into a powerful tool one day.

We now have an established, constructive communication setup with both these reporters, which should be a solid foundation for us to improve curl even more going forward.

I personally still do not use any AI at all during development, apart from occasional small experiments. Partly because they all seem to force me into using VS Code and I totally lose all my productivity with that. Partly because I've not found it very productive in my experiments.

Interestingly, this productive AI development happens pretty much concurrently with the AI slop avalanche we also see, proving that one AI is not necessarily like another AI.


#cURL #and #libcurl #AI #Development #source #code


welcome to 100 public functions in the #libcurl API day

libcurl gets a URL API

libcurl has done internet transfers specified as URLs for a long time, but the URLs you'd tell libcurl to use would always just get parsed and used internally. Applications that pass in URLs to libcurl would of course still very often need to parse URLs, create URLs or otherwise handle them, but libcurl has not been helping with that.

At the same time, the under-specification of URLs has led to a situation where there is really no stable document anywhere describing how URLs are supposed to work, and basically every implementer is left to handle the WHATWG URL spec, RFC 3986 and the world in between all by themselves. Understanding how their URL parsing libraries, libcurl, other tools and their favorite browsers differ is complicated.

By offering applications access to libcurl's own URL parser, we hope to tighten a problematic, vulnerable area for applications, where the URL parser library would believe one thing and libcurl another. This could and sometimes has led to security problems. (See for example Exploiting URL Parser in Trending Programming Languages! by Orange Tsai.)

Additionally, since libcurl deals with URLs and virtually every application using libcurl already does some amount of URL fiddling, it makes sense to offer it in the "same package". In the curl user survey 2018, more than 40% of the users said they'd use a URL API in libcurl if it had one.

## Handle based

Create a handle, operate on the handle and then clean up the handle when you're done with it. A pattern that is familiar to existing users of libcurl. So first you just make the handle:

```c
/* create a handle */
CURLU *h = curl_url();
```

## Parse a URL

Give the handle a full URL:

```c
/* "set" a URL in the handle */
curl_url_set(h, CURLUPART_URL, "https://example.com/path?q=name", 0);
```

If the parser finds a problem with the given URL, it returns an error code detailing the error. The flags argument (the zero in the function call above) allows the user to tweak some parsing behaviors.
It is a bitmask and all the bits are explained in the curl_url_set() man page.

A parsed URL gets split into its components, parts, and each such part can be individually retrieved or updated.

## Get a URL part

Get a separate part from the URL by asking for it. This example gets the host name:

```c
/* extract host from the URL */
char *host;
curl_url_get(h, CURLUPART_HOST, &host, 0);

/* use it, then free it */
curl_free(host);
```

As the example here shows, extracted parts must be specifically freed with curl_free() once the application is done with them. curl_url_get() can extract all the parts from the handle, by specifying the correct id in the second argument: scheme, user, password, port number and more. One of the "parts" it can extract is a bit special: `CURLUPART_URL`. It returns the full URL back (normalized and using proper syntax). curl_url_get() also has a flags option to allow the application to specify certain behaviors.

## Set a URL part

```c
/* set a URL part */
curl_url_set(h, CURLUPART_PATH, "/index.html", 0);
```

curl_url_set() lets the user set or update any and all of the individual parts of the URL. curl_url_set() can also update the full URL, and it accepts a relative URL in case an existing one was already set. It will then apply the relative URL on top of the former one and "transition" to the new absolute URL. Like this:

```c
/* first an absolute URL */
curl_url_set(h, CURLUPART_URL, "https://example.org:88/path/html", 0);

/* .. then we set a relative URL "on top" */
curl_url_set(h, CURLUPART_URL, "../new/place", 0);
```

## Duplicate a handle

It might be convenient to set up a handle once and then make copies of it:

```c
CURLU *n = curl_url_dup(h);
```

## Cleanup the handle

When you're done working with this URL handle, free it and all its related resources:

```c
curl_url_cleanup(h);
```

## Ship?

This API is marked as **experimental** for now and ships for the first time in libcurl 7.62.0 (October 31, 2018).
I will happily read your feedback and comments on how it works for you, what's missing, and what we should fix to make it even more usable for you and your applications!

We call it experimental to reserve the right to modify it slightly going forward if necessary; as soon as we remove that label, the API will be fixed and stay like that for the foreseeable future.

### See also

The URL API section in _Everything curl_.

It is now seven years since we introduced #libcurl's URL API: daniel.haxx.se/blog/2018/09/09/libcurl-...

Original post on bofh.social

This little crusade that #DanielStenberg is on trying to name and shame car companies (or any company for that matter) for not buying or paying for support for either #curl or #libcurl is pathetic and passive aggressive.

You chose the license so you can't get all pissy when companies abide by […]

Curl Runs In The World's Top 47 Car Brands [August 2025 Report] - OSTechNix Learn how curl, a small open-source tool, is built into hundreds of millions of cars, including models from the world’s top 47 car brands.

Curl Keeps Cars Rolling – Used by the World’s Top 47 Car Brands #curl #libcurl #automative #car #opensource #commandline
ostechnix.com/curl-runs-in...

curl

#curl 8.15.0 has been released ( #libcurl / #Haxx / #DICT / #FILE / #FTP / #FTPS / #Gopher / #HTTP / #HTTPS / #IMAP / #IMAPS / #LDAP / #LDAPS / #MQTT / #POP3 / #RTMP / #RTMPS / #RTSP / #SCP / #SFTP / #SMB / #SMBS / #SMTP / #SMTPS / #Telnet / #TFTP / #WebSocket / #SOCKS5 / #SCRAM / #TLS ) curl.se


Death by a thousand slops I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us. This trend does not seem t...

#cURL #and #libcurl #AI #bug #bounty #Security

curl

#curl 8.14.1 has been released ( #libcurl / #Haxx / #DICT / #FILE / #FTP / #FTPS / #Gopher / #HTTP / #HTTPS / #IMAP / #IMAPS / #LDAP / #LDAPS / #MQTT / #POP3 / #RTMP / #RTMPS / #RTSP / #SCP / #SFTP / #SMB / #SMBS / #SMTP / #SMTPS / #Telnet / #TFTP / #WebSocket / #SOCKS5 / #SCRAM / #TLS ) curl.se

curl

#curl 8.14.0 has been released ( #libcurl / #Haxx / #DICT / #FILE / #FTP / #FTPS / #Gopher / #HTTP / #HTTPS / #IMAP / #IMAPS / #LDAP / #LDAPS / #MQTT / #POP3 / #RTMP / #RTMPS / #RTSP / #SCP / #SFTP / #SMB / #SMBS / #SMTP / #SMTPS / #Telnet / #TFTP / #WebSocket / #SOCKS5 / #SCRAM / #TLS ) curl.se


Decomplexification (Clearly a much better word than simplification.) I believe we generally accep...

daniel.haxx.se/blog/2025/05/29/decomple...

#cURL #and #libcurl

The curl user survey 2025 is up

Yes! The time has come for you to once again do your curl community duty. Run over and fill in the curl user survey and tell us about how you use curl etc. This is the only proper way we get user feedback on a wide scale, so please use this opportunity to tell us what you really think.

This is the 12th time the survey runs. It is generally similar to last year's, but with some details updated and refreshed. The survey stays up for fourteen days. Tell your friends.


daniel.haxx.se/blog/2025/05/19/the-curl...

#cURL #and #libcurl #survey


#curlup 2025 brought developers, open-source enthusiasts, and industry leaders together in Prague for insightful sessions and great discussions. Special thanks to @apify for sponsoring!

Read the full recap here 👉 www.wolfssl.com/curl...
#curl #libcurl @bagder.mastodon.social.ap.brid.gy
1/3
