As mentioned before, I have ICS as provided by Samsung (so it is not stock ICS, it is Samsung's version of ICS) on the Galaxy Tab 10.1. Now, after nearly two months of using it, I want to share a little bit of my experience.
Basically it just works. If you know ICS already, you more or less know how it is on the Galaxy Tab 10.1.
The calendar app is different: it is the Samsung app, not the native ICS one. I have a problem syncing it with the Exchange connector of Horde 4, but I did not take the time to investigate the issue yet. My Nexus S syncs just fine, so it must be some modification by Samsung which causes the problem.
Sometimes the tablet hangs, and I have to shut it down by pressing the power button for a few seconds. This only happens when it is connected via WLAN. When I start the tablet again, it hangs again if I am not fast enough to enter the PIN of the SIM, unlock the screen, and deactivate the WLAN. Even then it will hang once more right after the WLAN is deactivated. After rebooting a second time (with the WLAN already deactivated), everything works again.
The email app also stutters sometimes. This happens when I open a folder with a lot of emails and the app tries to determine whether they have attachments. Either the app is not multi-threaded, or this part is not well implemented.
Apart from that it just works.
As previously reported, I tried the update to Android 3.2 on my Tab and was not happy with the new email app. At the weekend I had a little bit of time, so I tried to get the Email.apk from Android 3.1 into Android 3.2.
Long story short, I failed.
TitaniumBackup PRO was restoring for hours (with the option to migrate from a different ROM version enabled) until I killed the app; it did not get anywhere (I just emailed their support to ask if I did something completely stupid, or if this is a bug in TB). Copying the APK by hand into /system/apps did not work either (the app fails to start).
A while ago I committed the linuxulator DTrace probes I talked about earlier. I waited a little bit with this announcement to make sure I have not broken anything. Nobody has complained so far, so I assume nothing obviously bad crept in.
The more than 500 probes I committed do not cover the entire linuxulator, but they are a good start. Adding new ones is straightforward; if someone is interested in a junior kernel-hacker task, this would be one. Just ask me (or ask on emulation@), and I can guide you through it.
I have the habit of using chmod with the relative notation (e.g. g+w, a+r, go-w or similar) instead of the absolute one (e.g. 0640 or u=rw,g=r,o=). Recently I had to chmod a lot of files, and as usual I used the relative notation. With that many files, this took a long time. Time was not really an issue, so I did not stop it to restart with a better-performing command (e.g. find /path -type f -print0 | xargs -0 chmod 0644; find /path -type d -print0 | xargs -0 chmod 0755), but I thought a little tips&tricks posting may be in order, as not everyone knows the difference.
The relative notation
When you specify g+w, it means: add write access for the group, but keep everything else as it is. Naturally this means that chmod first has to look up the current access rights. So for each write (chmod) request, there has to be a read request first.
The absolute notation
The absolute notation is what most people are used to (at least the numeric form). It does not need to read the access rights before changing them, so there is less I/O to be done to get what you want. The drawback is that it is not so nice for recursive changes: you do not want the x-bit on data files, but you need it on directories. If you only have a tree of data files where you want uniform access, the find example above is probably faster (for sure if the directory metadata is still in RAM).
If you have a mix of binaries and data, it is a little bit more tricky to come up with a way which is faster. If the data files follow a name pattern, you can use it in the find.
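A sketch of what this could look like, assuming (purely for illustration) that the data files can be told apart by a *.dat suffix:

```shell
#!/bin/sh
# Hypothetical tree where data files carry a .dat suffix
# (the pattern is an assumption for the sake of the example):
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/readme.dat" "$dir/sub/tool"

find "$dir" -type d -print0 | xargs -0 chmod 0755                  # directories need x
find "$dir" -type f -name '*.dat' -print0 | xargs -0 chmod 0644    # data: no x
find "$dir" -type f ! -name '*.dat' -print0 | xargs -0 chmod 0755  # binaries: keep x
```

Three passes, but each one sets absolute modes without any per-file read beforehand.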
And if the group bits are non-uniform and you just want to make sure the owner has write access to everything, it may be faster to use the relative notation than to find a replacement command sequence in the absolute notation.
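For that case a single relative pass is hard to beat, because no absolute mode fits all files. A small demonstration (the file names and modes are made up):

```shell
#!/bin/sh
# Two files with deliberately different group/other bits:
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
chmod 0444 "$dir/a"
chmod 0664 "$dir/b"

# One relative pass: the owner gains write access everywhere,
# while the differing group/other bits stay untouched.
find "$dir" -type f -print0 | xargs -0 chmod u+w
```

Afterwards the first file is 0644 and the second still 0664, something no single absolute mode could have produced.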
My wife decided that we need a camcorder. As I am a good husband, I do not complain (she pays 😀 ).
There was an offer in a supermarket nearby. Not as cheap as you can find on the internet, but if there is a problem, it is much easier to complain. For something like this we prefer that and are OK with spending a little bit more money for the convenience.
This camcorder records to SDHC cards. Such cards have a speed rating, and you need a card with a certain minimum speed to be able to record videos with a camcorder. Unfortunately Samsung does not list the required speed rating anywhere. I searched the Samsung site, in the specifications and in the FAQ. Nothing. After a little bit of googling I at least found a review where the recording times for specific card sizes were listed.
So I took the card size in MB, divided it by the recording time in seconds, and got the data transfer rate for each of the supported resolutions. The 1080i resolution has the highest transfer rate, so it is the one that decides what kind of card you need.
The highest transfer rate seems to be less than 2.2 MB/s, so a class 4 SDHC card should be enough.
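To illustrate the arithmetic (the card size and recording time here are made-up round numbers, not Samsung's figures): a 4 GB card (4096 MB) that is full after about 32 minutes of 1080i recording gives

```shell
#!/bin/sh
# Illustrative numbers only: 4096 MB card, 32 minutes of recording.
size_mb=4096
rec_s=$((32 * 60))   # 1920 seconds
awk -v s="$size_mb" -v t="$rec_s" 'BEGIN { printf "%.2f MB/s\n", s / t }'
# prints: 2.13 MB/s
```

A class 4 card guarantees a sustained write speed of 4 MB/s, so it leaves comfortable headroom above such a rate.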
I noticed that we do not have an automatic way of scrubbing a ZFS pool periodically. A quick poll on fs@ revealed that there is interest in something like this. So I took a little bit of time to write a periodic daily script which checks whether the last scrub was more than X days ago and scrubs the pool accordingly. The script has options to scrub all pools or just a specific subset. It also allows specifying a time interval between scrubs for each pool, with different levels of fallback (if no pool-specific interval is set, the default interval is used, which is 30 days unless another default interval is specified).
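The fallback logic can be sketched like this (the variable names are my own illustration for this posting, not necessarily the ones the final script uses):

```shell
#!/bin/sh
# Per-pool scrub interval with fallback:
#   daily_scrub_zfs_<pool>_threshold  -> pool-specific interval in days, if set
#   daily_scrub_zfs_default_threshold -> default interval, if set
#   30                                -> hard-coded last resort
scrub_threshold() {
    _pool=$1
    eval _t=\"\$daily_scrub_zfs_${_pool}_threshold\"
    if [ -n "$_t" ]; then
        echo "$_t"
    elif [ -n "$daily_scrub_zfs_default_threshold" ]; then
        echo "$daily_scrub_zfs_default_threshold"
    else
        echo 30
    fi
}

# The daily job then compares the age of the last scrub against this value
# and runs "zpool scrub <pool>" when the threshold is exceeded.
```

This keeps the configuration in plain periodic.conf-style variables, which is how other periodic scripts are tuned as well.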
The discussion about this is happening over at fs@, so go there and have a look at the CFT (with a link to the work-in-progress version of the script) and the discussion if you are interested.
So far there are some minor details to sort out (and a little bit of documentation to write) before I can commit it… probably next week.