In case you do not know what you can do with DTrace, take the time to have a look at the DTrace blog. It is worth every minute you invest in reading it.
There is a huge discussion going on on hackers@ about how FreeBSD is not suitable for large installations (anymore?). As of this writing, the discussion has split into several sub-threads, and some of these subtopics could lead to some good improvements.
One subtopic is release engineering: changes like a more guided approach to what should be merged to which branch, the frequency of releases, and maybe some kind of long-term branch(es). There is also some discussion about arranging joint funding in some way from interested parties to pay someone to take care of the long-term branch(es).
Another subtopic is the way bugs are handled in our old bug-tracking software, and how patches go unnoticed there.
Both of them are connected (some parts more, some less) by the question of what can be done in a volunteer project.
To me it looks like the proposals “just” need some refinement and some “volunteers” to put value (meaning manpower and/or money) behind what they proposed.
What I want to discuss here is how tools could help make PRs/patches more visible to developers (there is already the possibility of getting emails from the small bugbuster team about patches in the PR database, but you have to ask them to receive these) and how to make it easier to get patches into FreeBSD.
Making bugs more visible to developers
The obvious one first: we need a different bug-tracking system. We already know this. IIRC there is (or was…) even someone working on an evaluation of what could be done and how easy or hard it would be. I am not aware of any outcome, despite the fact that it has been months (or even a year) since this was announced. I do not blame anyone here; I myself would like to find the time to finish some FreeBSD volunteer work.
In my opinion this needs to be handled in a commercial way: someone needs to be officially paid (with a deadline) to produce a result. Unfortunately, the problem is that the requirements are written in such a way that people do not have to change their workflows/procedures.
IIRC people ask to be able to send a mail to the bugtracker without the need for authentication. Personally I think the bug-tracking issue is in a state where we need to change our workflows/procedures. It is convenient to get mails from the bugtracker and only have to reply to the mail to add something. On the other hand, if I report bugs somewhere, and if I really care about the problem's resolution, I am willing to log in to whatever interface it takes to get this damn problem solved.
Sending a problem report in an easy way from the system where I have the issue is a very useful feature. Currently we have send-pr for this, and it uses email. This means it requires a working email setup. As a user I do not care whether the tool uses email or HTTP or HTTPS; I just want an easy way to submit the problem. I would not mind if I first had to do a “send-problem register me@tld” (once), a “send-problem login me@tld” (once per system+user I want to send from), and then maybe a “send-problem template write_template_here.txt” (to get some template text to fill out), edit the template file, and then run “send-problem send my_report.txt file1 file2 …”. That would be a different workflow, but still an easy one.
Email notifications are surely needed, but if I really care about a problem, I can be bothered to register first. So in my opinion we need a different bugtracker desperately enough that we should drop our requirements regarding our current workflow/procedures (even if it means we get no command-line way of submitting bugs at all). The primary goal of the software needs to be to make it easy to track and resolve bugs. Submitting bugs should not be hard either. Looking at the state of the world as it is ATM, I would say a web interface with authentication is not a big burden to bear if I really want to get my problem fixed. Some command-line tool would be nice to have, but given the current state of our bugtracker it needs to be optional instead of a hard requirement.
Apart from making it easy to track and resolve problems, the software also needs to be able to make us aware of the biggest problems. Now… you may ask what a big problem is. Well… IMO it does not matter to you what I think is big or small here. The person with a problem needs to decide what is a big problem to him, and people with the same problem need to be able to tell that it is also a big problem for them. So a feature which allows users to “vote” or “+1” or “AOL” (or however you want to call it) would let users with problems voice their opinion on the relevance of the problem to our userbase. This also means there needs to be a way to see the highest-voted problems. An automatic mail would be best, but as above this is optional. If I as a developer really care about this, I can be bothered to log in to a web interface (or maybe someone volunteers to copy & paste and send a mail… we need to be willing to rethink our procedures).
Getting patches into a FreeBSD branch more easily
It looks to me like this topic requires a little more involvement from multiple tools. In my opinion we need to switch to a distributed version control system: one which allows me to easily create my own branch of FreeBSD on my own hardware, and which lets other users use my branch easily (if I want to allow others to branch from my branch). It also needs to let me push my changes towards FreeBSD. Obviously not directly into the official sources, but into some kind of staging area. Other people should be able to have a look at this staging area and review what I did. They need to be able to make comments for others to see, or give some kind of (multi-dimensional?) rating for the patch (code quality / works for me / does not work / …). Based upon the review/rating and maybe some automated evaluation (compile test / regression test / benchmark run), a committer could push the patch into the official FreeBSD tree (ideal would be some automated notification system, a push-button solution for integration and so on, but as above we should not be afraid if we do not get all the bells and whistles).
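To make the idea more concrete, here is a rough sketch of how such a contributor-branch-plus-staging-area setup could look with git (git is just one possible DVCS for this; all repository names and paths below are made up for illustration):

```shell
# Sketch of the proposed workflow with git. Repository names/paths
# are hypothetical; any DVCS with cheap branching would do.
set -e
tmp=$(mktemp -d)

# Stand-ins for the official FreeBSD tree and the staging area.
git init -q --bare "$tmp/official.git"
git init -q --bare "$tmp/staging.git"

# A contributor clones the official tree onto his own hardware.
git clone -q "$tmp/official.git" "$tmp/work"
cd "$tmp/work"
git config user.name "Some Contributor"
git config user.email "contributor@example.org"
git commit -q --allow-empty -m "baseline"
git push -q origin HEAD

# He creates his own branch and commits his change there.
git checkout -q -b my-driver-fix
git commit -q --allow-empty -m "fix: hypothetical driver bug"

# Instead of pushing directly into the official sources, he pushes
# the branch to the staging area, where others can review/rate it.
git push -q "$tmp/staging.git" my-driver-fix

# Reviewers (and, after approval, a committer) can see the branch:
git ls-remote --heads "$tmp/staging.git"
```

The point of the sketch is only the separation: contributors push freely to the staging repository, while the official repository only receives branches a committer merges after review.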
If we had something like this in place, some kind of long-term-release branch could be maintained more easily in a collaborative manner. Companies which use the same long-term-release branch could submit their backports of fixes/features this way. They could also see if similar branches (there could be related but different branches, like 9.4-security-fixes-only <= 9.4-official-errata-only <= 9.4-bugfixes <= 9.4-bugfixes-and-driverupdates <= …) could be merged into their in-house branch (and maybe consequently pushed back to the official branch they branched from, if the patch comes from a different branch).
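The flow of a single fix between such related branches can be sketched with git cherry-picking (the branch names below are hypothetical and only mirror the examples in the text):

```shell
# Hypothetical example: a fix committed on a broad "bugfixes" branch
# is backported to a stricter "security-fixes-only" branch.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.name "Example"
git config user.email "example@example.org"

echo base > file.txt
git add file.txt
git commit -q -m "9.4 release baseline"

# Two long-term branches off the same release point.
git branch 9.4-security-fixes-only
git checkout -q -b 9.4-bugfixes

# A fix lands on the bugfixes branch first...
echo fix >> file.txt
git commit -q -am "fix: some bug that is also security relevant"
fix_commit=$(git rev-parse HEAD)

# ...and is cherry-picked onto the security-only branch, so both
# branches now carry the change without a full merge.
git checkout -q 9.4-security-fixes-only
git cherry-pick "$fix_commit"

grep fix file.txt   # the fix is now present on this branch too
```

The same mechanism would work in the other direction, too: a company could cherry-pick a backport from its in-house branch back into the official long-term branch it was derived from.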
It does not matter here whether we create a fixed set of branches for each release, or only some special-purpose branches based upon the phase of the moon (ideally we would create a lot of branches for every release, companies/users could cherry-pick/submit what they want, and the status of a long-term branch would be based solely on the inflow of patches and not on what the security team or release manager or a random developer thinks it should be… but the reality will probably be somewhere in the middle).
I do not know if tools exist to make all this happen, or which tools could be put together to make it happen. I also intentionally did not mention tools I am aware of which already provide (small) parts of this. These are just some ideas to think about. Interested parties are invited to join the discussion on hackers@ (which is far away from discussing specific tools or features), but you are also free to add some comments here.
The recent Phoronix benchmark which compared a release candidate of FreeBSD 9 with Oracle Linux Server 6.1 created a huge discussion on the FreeBSD mailing lists. The reason is that some people think the numbers presented there give a wrong picture of FreeBSD. Partly this is because not all benchmark numbers are presented on the most prominent page (as linked above), but only in a different place. This gives the impression that FreeBSD is inferior in this benchmark, while the benchmark just puts the focus (for a reason, according to some people) on a different part of the workload. To be more specific: blogbench does disk reads and writes in parallel, FreeBSD gives higher priority to writes than to reads, FreeBSD 9 outperforms OLS 6.1 in the writes while OLS 6.1 shines in the reads, and only the reads are presented on the first page. Another complaint is that the article claims the default install was used (which would mean UFS as the filesystem) when it was not (ZFS was used).
The author of the Phoronix article participated in parts of the discussion and asked for specific improvement suggestions. A FreeBSD committer seems to already be working on resolving some issues. What I personally do not like is that the article has not been updated with a remark that some of the presented results do not reflect reality and that a retest is necessary.
As there was much talk in the thread but not much visible activity from our side to resolve the issues, I started to improve the FreeBSD wiki page about benchmarking, so that we are able to point to it in case someone wants to benchmark FreeBSD. Others have already chimed in and improved some things too. It is far from perfect; some more eyes – and more importantly, some more fingers which add content – are needed. Please go to the wiki page and try to help out (if you are afraid to write something in the wiki, please at least post your suggestions on a FreeBSD mailing list so that others can improve the wiki page).
What we also need is a wiki page about FreeBSD tuning (a first step would be to take the man page and convert it into a wiki page, then improve it, and then feed the changes back to the man page, while keeping the wiki page so that parts of it can be cross-referenced from the benchmarking page).
I already said this in the thread about the Phoronix benchmark: everyone is welcome to improve the situation. Do not just talk, write something. No matter if it is an improvement to the benchmarking page, tuning advice, or a tool which inspects the system and suggests some tuning. If you want to help in the wiki, create a FirstnameLastname account and ask a FreeBSD committer for write access.
A while ago (IIRC we have to think in months or even years) there was a framework for automatic FreeBSD benchmarking. Unfortunately the author ran out of time. The framework was able to install a FreeBSD system on a machine, run some specified benchmark (not many benchmarks were integrated), and then install another FreeBSD version to run the same benchmark, or reinstall the same version to run another benchmark. IIRC there was also a database behind it which collected the results, and maybe there was even some way to compare them. It would be nice if someone could find some time to talk with the author, obtain the framework, and set it up somewhere, so that we have a controlled environment where we can do our own benchmarks in an automatic and repeatable fashion with several FreeBSD versions.
I committed the v4l2 support into the linuxulator (in 9-current). Part of this was the import of the v4l2 header from Linux. We have permission to use it (like the v4l one); it is not licensed under the GPL. This means we can use it in FreeBSD-native drivers, and they are even allowed to be compiled into GENERIC (but I doubt we have a driver which could provide the v4l2 interface in GENERIC).
Thanks to nox@ for writing the glue code.
After a quick discussion with nox@ I made a copy & paste of his “VDR is committed now” mail into the FreeBSD wiki. I also re-styled some small parts of it to fit better into the wiki. It is not perfect, but it is already usable. Now interested people can go and improve the docs there.
Thanks to Juergen for all his work in this area!
Today I stumbled again over some heat maps from Brendan Gregg (of DTrace fame). This time it was the PDF of his presentation at the LISA 2010 conference. It shows nicely how he plans to evolve them from a single machine (as in Analytics for the Oracle storage products) to the cloud. It is a very good overview of what kind of intuitive performance visualization you can do with this.
There are just too many nice and interesting things out there, and not enough time for all of them.
Brendan Gregg, of Sun/Oracle fame, wrote a good explanation of how to visualize latency to get a better understanding of what is going on (and, as a result, of how to solve bottlenecks). I have seen all of this already in various posts on his blog and in the Analytics package in an OpenStorage presentation, but the ACM article summarizes it very well.
Unfortunately Analytics is AFAIK not available in OpenSolaris, so we cannot go out and adapt it for FreeBSD (which would probably require porting/implementing some additional DTrace probes). I am sure something like this would be very interesting to all those companies which use FreeBSD in an appliance (regardless of whether it is a storage appliance like NetApp's, or a network appliance like a Cisco/Juniper router, or anything else which has to perform well).