

05-25-15 | The Anti-Patent Patent Pool

The idea of the Anti-Patent Patent Pool is to destroy the system using the system.

The Anti-Patent Patent Pool is an independent patent licensing organization. (Hence APPP)

One option would be to just allow anyone to use those patents free of charge.

A more aggressive option would be a viral licensing model. (like the GPL, which has completely failed, so hey, maybe not). The idea of the viral licensing model is like this :

Anyone who owns no patents may use any patent in the APPP for free (*) (if you currently own patents, you may donate them to the APPP).

If you wish to own patents, then you must pay a fee to license from the APPP. That fee is used to fund the APPP's activities, the most expensive being legal defense of its own patents, and legal attacks on other patents that it deems to be illegal or too broad.

(* = we'd have to be aggressive about going after companies that make a subsidiary to use APPP patents while still owning patents in the parent corporation)

The tipping point for the APPP would be to get a few patents that are important enough that major players need to either join the APPP (donate all their patents) or pay a large license.

The APPP provides a way for people who want their work to be free to ensure that it is free. In the current system this is hard to do without owning a patent, and owning a patent and enforcing it is hard to do without money.

The APPP pro-actively watches all patent submissions and objects to ones that cover prior art, are obvious and trivial, or excessively broad. It greatly reduces the issuance of junk patents, and fights ones that are mistakenly issued. (the APPP maintains a public list of patents that it believes to be junk, which it will help you fight if you choose to use the covered algorithms). (Obviously some of these activities have to be phased in over time as the APPP gets more money).

The APPP provides a way for small companies and individuals that cannot afford the lawyers to defend their work to be protected. When some evil behemoth tries to stop you from using algorithms that you believe you have a legal right to, rather than fight it yourself, you simply donate your work to the APPP and they fight for you.

Anyone who simply wants to ensure that they can use their own inventions could use the APPP.

Once the APPP has enough money, we would employ a staff of patent writers. They would take idea donations from the groundswell of developers, open-source coders, hobbyists. Describe your idea, the patent writer would make it all formal and go through the whole process. This would let us tap into where the ideas are really happening, all the millions of coders that don't have the time or money to pursue getting patents on their own.

In the current system, if you just want to keep your idea free, you have to constantly keep an eye on all patent submissions to make sure no one is slipping in and patenting it. It's ridiculous. Really the only safe thing to do is to go ahead and patent it yourself and then donate it to the APPP. (the problem is if you let them get the patent, even if it's bogus it may be expensive to fight, and what's worse is it creates a situation where your idea has a nasty asterisk on it - oh, there's this patent that covers this idea, but we believe that patent to be invalid so we claim this idea is still public domain. That's a nasty situation that will scare off lots of users.)

Some previous posts :

cbloom rants 02-10-09 - How to fight patents
cbloom rants 12-07-10 - Patents
cbloom rants 04-27-11 - Things we need
cbloom rants 05-19-11 - Nathan Myhrvold


Some notes :

1. I am not interested in debating whether patents are good or not. I am interested in providing a mechanism for those of us who hate patents to pursue our software and algorithm development in a reasonable way.

2. If you are thinking about the patent-or-not argument, I encourage you to think not of some ideal theoretical argument, but rather of the realities of the situation. I see this on both sides of the fence; those who are pro-patent because it "protects inventors" but choose to ignore the reality of the ridiculous patent system, and those on the anti-patent side who believe patents are evil and won't touch them, even though that may be the best way to keep free ideas free.

3. I believe part of the problem with the anti-patent movement is that we are all too fixated on details of our idealism. Everybody has slightly different ideas of how it should be, so the movement fractures and can't agree on a unified thrust. We need to compromise. We need to coordinate. We need to just settle on something that is a reasonable solution; perhaps not the ideal that you would want, but some change is better than no change. (of course the other part of the problem is we are mostly selfish and lazy)

4. Basically I think that something like the "defensive patent license" is a good idea as a way to make sure your own inventions stay free. It's the safest way (as opposed to not patenting), and in the long run it's the least work and maintenance. Instead of constantly fighting and keeping aware of attempts to patent your idea, you just patent it yourself, do the work up front and then know it's safe long term. But it doesn't go far enough. Once you have that patent you can use it as a wedge to open up more ideas that should be free. That patent is leverage, against all the other evil. That's where the APPP comes in. Just making your one idea free is not enough, because on the other side there is massive machinery that's constantly trying to patent every trivial idea they can think of.

5. What we need is for the APPP to get enough money so that it can be stuffing a deluge of trivial patents down the patent office's throat, to head off all the crap coming from "Intellectual Ventures" and its many brothers. We need to be getting at least as many patents as them and making them all free under the APPP.


Some links :

en.swpat.org - The Software Patents Wiki
Patent Absurdity - How software patents broke the system
defensivepatentlicense - Home
FOSS Patents - U.S. patent reform movement lacks strategic leadership, fails to leverage the Internet
PUBPAT - Home


05-21-15 | LZ-Sub

LZ-Sub decoder :

delta_literal = get_sub_literal();

if ( delta_literal != 0 )
{
    *ptr++ = delta_literal + ptr[-lastOffset];
}
else // delta_literal == 0
{
    if ( ! get_offset_flag() )
    {
        *ptr++ = ptr[-lastOffset];
    }
    else if ( get_lastoffset_flag() )
    {
        int lo_index = get_lo_index();
        lastOffset = last_offsets[lo_index];
        // do MTF or whatever using lo_index
        
        *ptr++ = ptr[-lastOffset];
        // extra 0 delta literal implied :
        *ptr++ = ptr[-lastOffset];
    }
    else
    {
        lastOffset = get_offset();
        // put offset in last_offsets set
        
        *ptr++ = ptr[-lastOffset];
        *ptr++ = ptr[-lastOffset];
        // some automatic zero deltas follow for larger offsets
        if ( lastOffset > 128 )
        {
            *ptr++ = ptr[-lastOffset];
            if ( lastOffset > 16384 )
            {
                *ptr++ = ptr[-lastOffset];
            }
        }   
    }

    // each single zero is followed by a zero runlen
    //  (this is just a speed optimization)
    int zrl = get_zero_runlen();
    while(zrl--)
        *ptr++ = ptr[-lastOffset];
}

This is basically LZMA. (sub literals instead of bitwise-LAM, but structurally the same) (also I've reversed the implied structure here; zero delta -> offset flag here, whereas in normal LZ you do offset flag -> zero delta)

This is what a modern LZ is. You're sending deltas from the prediction. The prediction is the source of the match. In the "match" range, the delta is zero.
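
For completeness, the encoder side of a sub literal is just the inverse of the decoder above (a sketch; put_sub_literal() is the assumed counterpart of get_sub_literal()) :

delta_literal = *ptr - ptr[-lastOffset];  // delta from the prediction at lastOffset
put_sub_literal(delta_literal);
ptr++;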

The thing about modern LZ's (LZMA, etc.) is that the literals-after-match (LAMs) are very important too. These are the deltas after the zero run range. You can't really think of the match as just applying to the zero-run range. It applies until you send the next offset.

You can also of course do a simpler & more general variant :

Generalized-LZ-Sub decoder :


if ( get_offset_flag() )
{
    // also lastoffset LRU and so on not shown here
    lastOffset = get_offset();
}

delta_literal = get_sub_literal();

*ptr++ = delta_literal + ptr[-lastOffset];

Generalized-LZ-Sub just sends deltas from prediction. Matches are a bunch of zeros. I've removed the acceleration of sending zeros as a runlen for simplicity, but you could still do that.

The main difference is that you can send offsets anywhere, not just at certain spots where there are a bunch of zero deltas generated (aka "min match lengths").

This could be useful. For example when coding images/video/sound, there is often not an exact match that gives you a bunch of exact zero deltas, but there might be a very good match that gives you a bunch of small deltas. It would be worth sending that offset to get the small deltas, but normal LZ can't do it.

Generalized-LZ-Sub could also give you literal-before-match. That is, instead of sending the offset at the run of zero deltas, you could send it slightly *before* that, where the deltas are not zero but are small.

(when compressing text, "sub" should be replaced with some kind of smart lexicographical distance; for each character precompute a list of its most likely substitution character in order of probability.)

LZ is a bit like a BWT, but instead of the contexts being inferred by the prefix sort, you transmit them explicitly by sending offsets to prior strings. Weird.


05-21-15 | Umm

I sent a lawyer an email yesterday.

Today they sent me back an email saying :

"I need your email address so I can send you the documents you need to sign"

Umm... you are not inspiring great confidence in your abilities.

Also, pursuant to my last post about spam - pretty much all my correspondence with lawyers over the past few months, Google decides to put in the spam folder. I keep thinking "WTF why didn't this lawyer get back to me - oh crap, go check the spam". Now, I'm totally down with the comic social commentary that Google is making ("ha ha, all email from lawyers is spam, amirite? lol"). But WTF your algorithms are insanely broken. I mean, fucking seriously you suck so bad.


05-21-15 | Software Patents are Fucking Awesome

Awesome. Spotted on encode.ru. It was inevitable I suppose :

"System and method for compressing data using asymmetric numeral systems with probability distributions"

By these tards :

Storleap

Someone in the UK go over and punch them in the balls.

For those not aware of the background, ANS is probably the biggest invention in data compression in the last 20 years. Its inventor (Jarek Duda) has explicitly tried to publish it openly and make it patent-free, because he's awesome.

In the next 10 years I'm sure we will get patents for "using ANS with string-matching data compression", "using ANS with block mocomp data compression", "using ANS as a replacement for Huffman coding", "deferred summation with ANS", etc. etc. Lots of brilliant inventions like that. Really stimulating for innovation.

(as has happened over and over in data compression, and software in general in the past; hey let's take two obvious previously existing things; LZ string matching + Huffman = patent. LZ + hash table = patent. JPEG + arithmetic = patent. Mocomp + Huffman = patent. etc. etc.)

(often glossed over in the famous Stac-Microsoft suit story is the question of WHAT THE FUCK the LZS patent was supposed to be for? What was the invention there exactly? Doing LZ with a certain fixed bit encoding? Umm, yeah, like everyone does?)

Our patent system is working great. It obviously protects and motivates the real inventors, and doesn't just act as a way for the richest companies to lock in semi-monopolies of technologies they didn't even invent. Nope.

Recently at RAD we've made a few innovations related to ANS that are mostly in the vein of small improvements or clever usages, things that I wouldn't even imagine to patent, but of course that's wrong.

I've also noticed in general a lot of these vaporware companies in the UK. We saw one at RAD a few years ago that claimed to use "multi-dimensional curve interpolation for data compression" or some crackpot nonsense. There was another one that used alternate numeral systems (not ANS, but p-adic or some such) for god knows what. A few years ago there were lots of fractal-image-compression and other fractal-nonsense startups that did ... nothing. (this was before the VC "pivot" ; hey we have a bunch of fractal image patents, let's make a text messaging app)

They generally get some PhD's from Cambridge or whatever to be founders. They bring a bunch of "industry luminaries" on the board. They patent a bunch of nonsense. And then ...

... profit? There's a step missing where they actually ever make anything that works. But I guess sometimes they get bought for their vapor, or they manage to get a bullshit patent that's overly-general on something they didn't actually invent, and then they're golden.

I wonder if these places are getting college-backed "incubation" incentives? Pretty fucking gross up and down and all around. Everyone involved is scum.

(In general, universities getting patents and incubating startups is fucking disgusting. You take public funding and students' tuition, and you use that to lock up ideas for private profit. Fucking rotten, you scum.)


On a more practical note, if anyone knows the process for objecting to a patent in the UK, chime in.

Also, shame on us all for not doing more to fight the system. All our work should be going in the Anti-Patent Patent Pool.


Under the current first-to-file systems, apparently we are supposed to sit around all day reading every patent that's been filed to see if it covers something that we have already invented or is "well known" / public domain / prior art.

It's really a system that's designed around patents. It assumes that all inventions are patented. It doesn't really work well with a prior invention that's just not patented.

Which makes something like the APPP even more important. We need a way to patent all the free ideas just as a way to keep them legally free and not have to worry about all the fuckers who will rush in and try to patent our inventions as soon as we stop looking.


05-17-15 | The Google Spam Filter is Intentionally Bad

I'm convinced at this point that Google intentionally filters spam wrong.

Not in a nefarious way, like haha we're going to send your good mails to "spam" and let the crap through! Take that!

But actually in a sort of more deeply evil way. A capitalist way. They specifically *want* to allow through mass-mailings from corporations that they do not consider spam.

In my opinion, those are all spam. There is not a single corporate mass-mailing that I ever intentionally subscribed to.

Basically there's a very very easy spam filtering problem :

Easy 1. Reject all mass-mailings. Reject all mailings about sales, products, offers. Reject all mailings about porn or penises or Nigerian princes.

Easy 2. Allow through all mail that's hand-written by a human to me. Particularly mail from someone that I have written to in the past.

That would be fine with me. That would get 99.99% of it right for me.

They don't want to solve that problem. Instead they try to solve the much-harder problem of allowing through viagra offers that are for some reason not spam. For the email user who *wants* to get mass-mail offers of 50% off your next order.

I just don't understand how "yeah, let's go out to dinner" from my friend, who is responding with quote to a fucking mail that I sent, goes in the Spam box, but "get direct email mass-marketing secrets to double your business!" goes in my inbox. How can it be so bad, I just really don't understand it. Fucking the most basic keyword include/exclude type of filter could do better.

I should have just written my own, because it's the kind of problem that you want to be constantly tweaking on. Every time a mail is misclassified, I want to run it through my system and see why that happened and then try to fix it.

It would be SOOO fucking easy for them. Being in a position as a central mail processor, they can tell which mails are unique and which are mass-sent, and just FUCKING BLOCK ALL THE MASS-SENT MAIL. God dammit. You are fucking me up and I know you're doing it intentionally. I hate you.


I mean, fuck. It's ridiculous.

They are responding to a mail I sent. The mail I sent is fucking quoted right there. I sent the fucking mail from gmail so you can confirm it's for real. I sent to their address with gmail. AND YOU PUT THEIR REPLY IN SPAM. WTF WTF WTF

But this is not spam :

Report: creative teamwork is easier with cloud-based apps

Businesses Increase Screening of Facebook, Twitter Before Hiring

Trying to solve the Prospecting Paradox?

I'd like to add you to my professional network on LinkedIn


Maybe I'm being a bit overly simplistic and harsh. Maybe there are mass-mailings that look spammish, but you actually want to get? Like, your credit card bill is due?

I'm not sure. I'm not sure that I ever need to get any of that. I don't need those "shipping confirmation" emails from Amazon. If they just all got filed to the "mass mail" folder, I could go look for them when I need them.


I want to make my own private internet. And then not allow anyone else to use it because you'd all just fuck it up.


05-16-15 | Threading Primitive : monitored semaphore

A monitored semaphore allows two-sided waiting :

The consumer side decs the semaphore, and waits on the count being positive.

The producer side incs the semaphore, and can wait on the count being a certain negative value (some number of waiting consumers).

Monitored semaphore solves a specific common problem :

In a worker thread system, you may need to wait on all work being done. This is hard to do in a race-free way using normal primitives. Typical ad-hoc solutions may miss work that is pushed during the wait-for-all-done phase. This is hard to enforce, ugly, and makes bugs. (it's particularly bad when work items may spawn new work items).

I've heard of many ad-hoc hacky ways of dealing with this. There's no need to muck around with that, because there's a simple and efficient way to just get it right.

The monitored semaphore also provides a race-free way to snapshot the state of the work system - how many work items are available, how many workers are sleeping. This allows you to wait on the joint condition - all workers are sleeping AND there is no work available. Any check of those two using separate primitives is likely a race.

The implementation is similar to the fastsemaphore I posted before.

"fastsemaphore" wraps some kind of underlying semaphore which actually provides the OS waits. The underlying semaphore is only used when the count goes negative. When count is positive, pops are done with simple atomic ops to avoid OS calls. eg. we only do an OS call when there's a possibility it will put our thread to sleep or wake a thread.

"fastsemaphore_monitored" uses the same kind atomic variable wrapping an underlying semaphore, but adds an eventcount for the waiter side to be triggered when enough workers are waiting. (see who ordered event count? )

Usage is like this :


To push a work item :

push item on your queue (MPMC FIFO or whatever)
fastsemaphore_monitored.post();

To pop a work item :

fastsemaphore_monitored.wait();
pop item from queue

To flush all work :

fastsemaphore_monitored.wait_for_waiters(num_worker_threads);

NOTE : in my implementation, post & wait can be called from any thread, but wait_for_waiters must be called from only one thread. This assumes you either have a "main thread" that does that wait, or that you wrap that call with a mutex.

template <typename t_base_sem>
class fastsemaphore_monitored
{
    atomic<S32> m_state;
    eventcount m_waiters_ec;
    t_base_sem m_sem;
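
    // m_state layout : signed semaphore count in the high 24 bits (>> FSM_COUNT_SHIFT),
    //  and the wait_for_waiters target count in the low 8 bits :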

    enum { FSM_COUNT_SHIFT = 8 };
    enum { FSM_COUNT_MASK = 0xFFFFFF00UL };
    enum { FSM_COUNT_MAX = ((U32)FSM_COUNT_MASK>>FSM_COUNT_SHIFT) };
    enum { FSM_WAIT_FOR_SHIFT = 0 };
    enum { FSM_WAIT_FOR_MASK = 0xFF };
    enum { FSM_WAIT_FOR_MAX = (FSM_WAIT_FOR_MASK>>FSM_WAIT_FOR_SHIFT) };

public:
    fastsemaphore_monitored(S32 count = 0)
    :   m_state(count<<FSM_COUNT_SHIFT)
    {
        RL_ASSERT(count >= 0);
    }

    ~fastsemaphore_monitored()
    {
    }

public:

    inline S32 state_fetch_add_count(S32 inc)
    {
        S32 prev = m_state($).fetch_add(inc<<FSM_COUNT_SHIFT,mo_acq_rel);
        S32 count = ( prev >> FSM_COUNT_SHIFT );
        RR_ASSERT( count < 0 || ( (U32)count < (FSM_COUNT_MAX-2) ) );
        return count;
    }

    // warning : wait_for_waiters can only be called from one thread!
    void wait_for_waiters(S32 wait_for_count)
    {
        RL_ASSERT( wait_for_count > 0 && wait_for_count < FSM_WAIT_FOR_MAX );
        
        S32 state = m_state($).load(mo_acquire);
        
        for(;;)
        {
            S32 cur_count = state >> FSM_COUNT_SHIFT;

            if ( (-cur_count) == wait_for_count )
                break; // got it
        
            S32 new_state = (cur_count<<FSM_COUNT_SHIFT) | (wait_for_count << FSM_WAIT_FOR_SHIFT);
            
            S32 ec = m_waiters_ec.prepare_wait();
            
            // double check and signal what we're waiting for :
            if ( ! m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                continue; // retry ; state was reloaded
            
            m_waiters_ec.wait(ec);
            
            state = m_state($).load(mo_acquire);
        }
        
        // now turn off the mask :
        
        for(;;)
        {
            S32 new_state = state & FSM_COUNT_MASK;
            if ( state == new_state ) return;
        
            if ( m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                return; 
                
            // retry ; state was reloaded
        }
    }

    void post()
    {
        if ( state_fetch_add_count(1) < 0 )
        {
            m_sem.post();
        }
    }

    void wait_no_spin()
    {
        S32 prev_state = m_state($).fetch_add((-1)<<FSM_COUNT_SHIFT,mo_acq_rel);
        S32 prev_count = prev_state>>FSM_COUNT_SHIFT;
        if ( prev_count <= 0 )
        {
            S32 waiters = (-prev_count) + 1;
            RR_ASSERT( waiters >= 1 );
            S32 wait_for = prev_state & FSM_WAIT_FOR_MASK;
            if ( waiters == wait_for )
            {
                RR_ASSERT( wait_for >= 1 );
                m_waiters_ec.notify_all();
            }
            
            m_sem.wait();
        }
    }
    
    void post(S32 n)
    {
        RR_ASSERT( n > 0 );
        for(S32 i=0;i<n;i++)
            post();
    }
       
    bool try_wait()
    {
        // see if we can dec count before preparing the wait
        S32 state = m_state($).load(mo_acquire);
        for(;;)
        {
            if ( state < (1<<FSM_COUNT_SHIFT) ) return false;
            // dec count and leave the rest the same :
            //S32 new_state = ((c-1)<<FSM_COUNT_SHIFT) | (state & FSM_WAIT_FOR_MASK);
            S32 new_state = state - (1<<FSM_COUNT_SHIFT);
            RR_ASSERT( (new_state>>FSM_COUNT_SHIFT) >= 0 );
            if ( m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                return true;
            // state was reloaded
            // loop
            // backoff here optional
        }
    }
     
       
    S32 try_wait_all()
    {
        // see if we can dec count before preparing the wait
        S32 state = m_state($).load(mo_acquire);
        for(;;)
        {
            S32 count = state >> FSM_COUNT_SHIFT;
            if ( count <= 0 ) return 0;
            // swap count to zero and leave the rest the same :
            S32 new_state = state & FSM_WAIT_FOR_MASK;
            if ( m_state($).compare_exchange_strong(state,new_state,mo_acq_rel) )
                return count;
            // state was reloaded
            // loop
            // backoff here optional
        }
    }
           
    void wait()
    {
        int spin_count = rrGetSpinCount();
        while(spin_count--)
        {
            if ( try_wait() ) 
                return;
        }
        
        wait_no_spin();
    }

};


05-16-15 | LZ literals after match

Some vague rambling about LAMs.

LAMs are weird.

LAM0, the first literal after a match, has the strong exclusion property (assuming maximum match lengths). LAM0 is strictly != lolit. (lolit = literal at last offset).

LAM1, the next literal after that (the second literal after the end of the match), has the exact opposite property - a VERY strong prediction that LAM1 == lolit. This prediction continues but weakens as you go to LAM2, LAM3, etc.

In Oodle LZNA (and in many other coders), I send a flag for (LAM == lolit) as a separate event. That means in the actual literal coding path you still have LAM1 != lolit. (the LAM == lolit flag should be context-coded using the distance from the end of the match).

In all cases, even though you know LAM != lolit, lolit is still a very strong predictor for LAM. Most likely LAM is *similar* to lolit.

LAM is both an exclude AND a predictor!

What similar means depends on the file type. In text it means something like vowels stay vowels, punctuation stays punctuation. lolit -> LAM is sort of like substituting one character change. In binary, it often means that they are numerically close. This means that the delta |LAM - lolit| is never zero, but is often small.

One of the interesting things about the delta is that it gives you a data-adaptive stride for a delta filter.

On some files, you can get huge compression wins by running the right delta filter. But the ideal delta distance is data-dependent (*). The sort of magic thing that works out is that the LZ match offsets will naturally pick up the structure & word sizes. In a file of 32-byte structs made of DWORDs, you'll get offsets of 4,8,12,32,etc. So then forming the LAM sub from that offset is just a way of doing a delta with that deduced stride. On DWORD or F32 data, you tend to get a lot of offset=4, so the LAM sub tends to just be doing a delta from the previous word (note of course this is a bytewise delta, not a proper dword delta).

(* = this is a huge thing that someone needs to work on; automatic detection of delta filters for arbitrary data; deltas could be byte,word,dword, other, from immediate neighbors or from struct/row strides, etc. In a compression world where we are fighting over 1% gains, this can be a 10-20% jump.)
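
For concreteness, here's a minimal sketch of a plain fixed-stride bytewise delta filter - the thing the LAM sub is effectively giving you with a data-adaptive stride (stride = 4 for DWORD data, 32 for 32-byte structs, etc) :

static void delta_filter_encode(unsigned char * buf, int size, int stride)
{
    // back to front, so each byte is subtracted from its *original* predecessor :
    for (int i = size - 1; i >= stride; i--)
        buf[i] = (unsigned char)(buf[i] - buf[i - stride]);
}

static void delta_filter_decode(unsigned char * buf, int size, int stride)
{
    // front to back, adding back the already-reconstructed predecessor :
    for (int i = stride; i < size; i++)
        buf[i] = (unsigned char)(buf[i] + buf[i - stride]);
}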

Experimentally we have observed that LAMs are very rapidly changing. They benefit greatly from very quickly adapting models. They like geometric adaptation rates (more recent events are much more important). They cannot be modeled with large contexts (without very sophisticated handling of sparsity and fast adaptation), they need small contexts to get lots of events and statistical density. They seem to benefit greatly from modeling in groups (eg. bitwise or nibblewise or other), so that events on one symbol also affect other probabilities for faster group learning. Many of these observations are similar for post-BWT data. LAM sub literals do seem to behave like post-BWT data to some extent, and similar principles of modeling apply.

So, for example, just coding an 8-bit symbol using the 8-bit lolit as context is a no-go. In theory this would give you full modeling of the effects of lolit on the current symbol. In practice it dilutes your statistics way too much. (in theory you could do some kind of one-count boosts other counts thing (or a secondary coding table ala PPMZ SEE), but in practice that's a mess). Also as noted previously, if you have the full 8-bit context, then whether you code symbol raw or xor or sub is irrelevant, but if you do not have the full context then it does change things.

Related posts :

cbloom rants 08-20-10 - Deobfuscating LZMA
cbloom rants 09-14-10 - A small note on structured data
cbloom rants 03-10-13 - Two LZ Notes
cbloom rants 06-12-14 - Some LZMA Notes
cbloom rants 06-16-14 - Rep0 Exclusion in LZMA-like coders
cbloom rants 03-15-15 - LZ Literal Correlation Images


05-13-15 | Skewed Pareto Chart

It's hard to see just the decomp speed in the normal Pareto Chart. It gets squished down over at the far-right Y-intercept.

The obvious fix is just to magnify the right side. This is a linear scaling of the data; *1 on the far left, *10 on the far right :

The far-left is still proportional to the compression ratio, the far right is proportional to the decompression speed. The compressor lines are still speedups vs. memcpy, but the memcpy baseline is now sloped.

I'm not really sure how I feel about the warped chart vs unwarped.

The Pareto curves are in fact sigmoids (tanh's).


speedup = 1 / (1/compression_ratio + disk_speed / decompress_speed)

speedup = 1 / (1/compression_ratio + exp( log_disk_speed ) / decompress_speed)

(here they're warped sigmoids because of the magnification; the ones back here in the LZNA post are true sigmoids)
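
For concreteness, here's a minimal sketch of how one of these speedup curves is evaluated; each compressor is summarized by its {ratio, decode speed} pair, and the numbers here are just the LZNA lzt99 figures from the post below, used as an example :

#include <math.h>
#include <stdio.h>

int main(void)
{
    double ratio = 2.698, decode_speed = 75.65;    // e.g. LZNA on lzt99 ; decode speed in mb/s
    for (double log_disk = 0.0; log_disk <= 4.0; log_disk += 0.25)
    {
        double disk_speed = pow(10.0, log_disk);   // mb/s ; the chart's x axis is log disk speed
        double speedup = 1.0 / ( 1.0/ratio + disk_speed/decode_speed );
        printf("%10.2f mb/s : speedup vs memcpy = %.3f\n", disk_speed, speedup);
    }
    return 0;
}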

I believe (but have not proven) that a principle of the Pareto Frontier is that the maximum of all compressors should also be a sigmoid.


max_speedup(disk_speed) = MAX{c}( speedup[compressor c](disk_speed) );

One of the nice things about these charts is it makes it easy to see where some compressors are not as good as possible. If we fit a sigmoid over the top of all the curves :

We can easily see that LZHLW and LZNIB are not touching the curve. They're not as good as they should be in space/speed. Even though nothing beats them at the moment (that I know of), they are algorithmically short of what's possible.

There are two things that constrain compressors from being better in a space/speed way. There's 1. what is our current best known algorithm. And then there's 2. what is possible given knowledge of all possible algorithms. #2 is the absolute limit and eventually it runs into a thermodynamic limit. In a certain amount of cpu time (cpu bit flips, which increase entropy), how much entropy can you take out of a given data stream? You can't beat that limit no matter how good your algorithm is. So our goal in compression is always to just find improvements in the algorithms to edge closer to that eventual limit.

Anyway. I think I know how to fix them, and hopefully they'll be up at the gray line soon.


05-11-15 | ANS Minimal Flush

A detail for the record :

ANS (TANS or RANS) in the straightforward implementation writes a large minimum number of bytes.

To be concrete I'll consider a particular extremely bad case : 64-bit RANS with 32-bit renormalization.

The standard coder is :


initialize encoder (at end of stream) :

x = 1<<31

renormalize so x stays in the range x >= (1<<31) and x < (1<<63)

flush encoder (at the beginning of the stream) :

output all 8 bytes of x

decoder initializes by reading 8 bytes of x

decoder renormalizes via :

if ( x < (1<<31) )
{
  x <<= 32;  x |= get32(ptr); ptr += 4;
}

decoder terminates and can assert that x == 1<<31

this coder outputs a minimum of 8 bytes, which means it wastes up to 7 bytes on low-entropy data (assuming 1 byte minimum output and that the 1 byte required to byte-align output is not "waste").

In contrast, it's well known how to do minimal flush of arithmetic coders. When the arithmetic coder reaches the end, it has a "low" and "range" specifying an interval. "low" might be 64-bits, but you don't need to output them all, you only need to output enough such that the decoder will get something in the correct interval between "low" and "low+range".

Historically people often did arithmetic coder minimum flush assuming that the decoder would read zero-valued bytes after EOF. I no longer do that. I prefer to do a minimum flush such that decoder will get something in the correct interval no matter what byte follows EOF. This allows the decoder to just read past the end of your buffer with no extra work. (the arithmetic coder reads some # of bytes past EOF because it reads enough to fill "low" with bits, even though the top bits are all that are needed at the end of the stream).

The arithmetic coder minimum flush outputs a number of bytes proportional to log2(1/range) , which is the number of bits of information that are currently held pending in the arithmetic coder state, which is good. The excess is at most 1 byte.

So, to make ANS as clean as arithmetic coding we need a minimal flush. There are two sources of the waste in the normal ANS procedure outlined above.

One is the initial value of x (at the end of the stream). By setting x to (1<<31), the low end of the renormalization interval, we have essentially filled it with bits it has to flush. (the number of pending bits in x is log2(x)). But those bits don't contain anything useful (except a value we can check at the end of decoding). One way to remove that waste is to stuff some other value in the initial state which contains bits you care about. Any value you initialize x with, you get back at the end of decoding, so then those bits aren't "wasted". But it can be annoying to find something useful to put in there, since you don't get that value out until the end of decoding.

The other source of waste is the final flush of x (at the beginning of the stream). This one is obvious - the # of pending bits stored in x at any time is log2(x). Clearly we should be flushing the final value of x in a # of bits proportional to log2(x).

So to do ANS minimal flush, here's one way :


initialize encoder (at end of stream) :

x = 0

renormalize so x stays in the range x < (1<<63)

flush encoder (at the beginning of the stream) :

output # of bytes with bits set in x, and those bytes

decoder initializes by reading variable # of bytes of x

decoder renormalizes via :

if ( x < (1<<31) )
{
  if ( ptr < ptrend )
  {
    x <<= 32;  x |= get32(ptr); ptr += 4;
  }
}

decoder terminates and can assert that x == 0

This ANS variant will output only 1 byte on very-low-entropy data.
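
To make the encoder-side flush concrete, here's a minimal sketch for the 64-bit / 32-bit-renorm case described above. How the byte count is stored is an assumption here; in practice you'd fold it into whatever header you already have, which is how the 1-byte minimum is reached :

#include <stdint.h>

// flush at the front of the stream (rANS encodes back-to-front) :
//  write only the significant bytes of x, plus the count of them
static int rans_minimal_flush(uint64_t x, unsigned char * out)
{
    int nbytes = 0;
    for (uint64_t t = x; t != 0; t >>= 8)
        nbytes++;

    *out++ = (unsigned char) nbytes;              // byte count ; an assumption, fold into your header as you like
    for (int i = nbytes-1; i >= 0; i--)           // high byte first (just a convention both sides must agree on)
        *out++ = (unsigned char)( x >> (i*8) );

    return 1 + nbytes;
}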

There are now two phases of the coder. In the beginning of encoding (at the end of the stream), x is allowed to be way below the renormalization range. During this phase, encoding just puts information into x, and the value of x grows. (note that x can actually stay 0 and never hold any bits if your data consists entirely of the bottom symbol in RANS). Once x grows up into the renormalization interval, you enter the next phase where bits of x are pushed to the output to keep x in the renormalization interval. In decoding, during the first phase you read bytes from the stream to fill x with bits and keep it in the renormalization interval. Once the decoder read pointer hits the end, you switch to the second phase, and now x is allowed to shrink below the renormalization minimum and you can continue to decode the remaining information held in it.

This appears to add an extra branch to the decoder renormalization, but that can be removed by duplicating your decoder into "not near the end" and "near the end" variants.

The #sigbit output of x at the head is just the right thing and should always be done in all variants of ANS.

The checking ptr vs. ptrend and starting x = 0 is the variant that I call "minimal ANS".

Unfortunately "minimal ANS" doesn't play well with the ILP multi-state interleaved ANS. To do interleaved ANS like this you would need an EOF marker for each state. That's possible in theory (and could be done compactly in theory) but is a pain in the butt in practice.


05-10-15 | Did I ever mention that I fucking hate the fucking web?

(I might be a bit cranky today. Too much work and not enough sex. I should probably just go to a bar and talk about how I love Obama and taxes so I can get in a fight. Instead I'll rage about the fucking web.)

I'm trying to get together the photos of my baby to share with my mom. What a fucking nightmare. They're mostly on my phone, and auto-backed up to Google Photos. Should be easy, right?

The Google Photos web interface is fucking wrist-slashing insanity. It's SO FUCKING SLOW. It should not take so long to show me a few little thumbnails. Fucking quit all the fucking AJAX fancy bullshit whatever the fuck you're fucking doing oh my god.

It always only wants to show me "highlights". Who told you to fucking do that? I have never highlighted anything so I'm not sure how you decided what was a highlight and what wasn't. You fucking dicks.

Simple shit like making an album and trying to put the correct photos in the album just has no decent workflow. FUCK.

So I'm going to just download them and do it on my computer. Fine.

There's no download all. I'm supposed to what, click each fucking one and download? (which is a frustrating nightmare because the click is some super slow awful web popup).

Okay, I can use Google Takeouts to just get the whole thing. Hmm. Why are my photos fucking 8 GB? Oh, because it's giving me all my videos too. FUCK FUCK FUCK. I just want the photos not the videos. Nope, Takeouts gives you everything.

Okay, I'll just download the 8 GB. Oh awesome the download the failed. Oh awesome it failed again.

Okay, I'll get the download URL and give it to DownloadThemAll which is good and can do resumes and so on and the main reason I cling to Firefox.

NOPE the fucking download link is not an actual file it's some fucking redirect login bullshit that DTA can't handle. ARG ARG ARG.

And now fuck my fucking baby photos and fuck my mom (sorry mom) I'm not fucking dealing with this shit and I fucking hate the fucking web god dammit.


For some time I've been using Google Classic Maps ("https://maps.google.com/maps?output=classic&dg=opt"). And now it's been killed. Maybe I'll switch to Bing? Or fuck that. Maybe I should just buy a good set of paper maps. I'm not sure that even exists any more. Ever since the Thomas Guide switched to computer-generated maps they really suck, they're ugly and the layout is no good and hard to read.

The reason I saw on the web for killing it (Google Classic Maps) was that too many people were opting out of new maps. You killed it because people liked it. I don't know if that's true, but it is awesomely in character.

For a while I was on the Google Forums complaining about Blogger. Just about everyone who runs a blog at some point gets a troll and realizes that they need the ability to just ban an individual. Can't do it. So they go on the Google product forum and say hey can we get blacklisting and whitelisting? The Google response was "we know you want that, and fuck you".

REMINDER TO SELF : always download all the images made by Google Charts because that service will die at some point. (this would be good practice even if Google didn't randomly chop off its own limbs on a regular basis)


I don't keep any cookies or browse history. With everyone going to fucking two-phase login this is starting to get annoying. To login I now have to get a text code to my phone and enter that. It's tedious.

But the thing that really kills me is this stupid detail :

I get the numeric code sent to me. I go to Google Voice on my computer (because actually ever touching the phone is to be avoided at all cost). I double-click the number to copy it. I paste it in the two-phase entry.

It fails. Wrong code.

I try again. It fails.

The fucking double-click is selecting the space after the number, and the fucking login doesn't ignore the trailing space. It's lazy bad programmer shit like that which makes me furious.

Another one I hit often is using online payment thingies. I'll copy-paste the amount from my bill, something like "$1,234" and hit okay and I get

"invalid entry, please enter a numeric value"

IT'S A FUCKING PAYMENT ENTRY BOX. You can fucking strip the leading $ and commas you piece of shit mother fucking asshole terrible programmers.


I'm trying to login to Skype on my phone.

(side note: summary of every Skype session I've ever had : "I can see you, can you see me? I can't hear you. Oh, you're upside down. Let me log off and disconnect. Now you're black. Try again. It's real glitchy, let's restart it. Hey, it's working! Hi! Hi! Okay, gotta go now.")

It says login with your skype account or your microsoft account.

So I tediously enter my microsoft account login which has a password like fucking @#$ASD@!#$<:22 and is fucking awful to type (and is starred out you fucking fucks the fucking fuck).

Skype says "oh, it looks like you entered a microsoft account, redirecting..."

And it pops up a new login page WITH BLANK FUCKING ENTRIES. I WANT TO STAB YOU IN THE COLON.


I'm so fucking sick of loading web pages and seeing "connecting to blah.. connecting to blah.." and seeing shit popping in slowly and reflowing and the focus popping and all this fucking shit.

Hey, fucking remedial loading school. You put all the content needed for the page in one package. Send me the one package. BOOM it loads.

Incremental is bullshit.

Back in the 90's some time, I worked for Eclipse on streaming 3d for the web. One of the things I did was a progressive wavelet image compressor so we could do things like send the first 5k of each image, then the next 10k, and because of the truncation property of bitplane-coded wavelets those were good low quality versions of the image that could just be tacked together.

So we tried to test it and demo it.

Everything just instantly loaded and you couldn't see the progressive wavelet load at all.

Because if you're not a fucking moron and you package together your content and just have a single download bundle to get your content, hey the internet is actually really fucking fast (even back in the 90's !!).

To show it off I put in a bunch of fake delays on the downloader to simulate slow hosts, so that you could see the wavelets gradually getting better, and that's what we showed to VC's or whatever.

I guess I could have just taken all the files and scattered them on different hosts around the world, THE WAY FUCKING NORMAL WEB PAGES DO. It's like they have very carefully gone through this process of intentionally slowing down the internet for no reason.


Sometimes I wish that I was like an air-cooled Porsche mechanic or something very stable and non-computer related, so I could just work away in my shop and not have to ever touch this fucking demon box.


05-09-15 | Oodle LZNA

Oodle 1.45 has a new compressor called LZNA. (LZ-nibbled-ANS)

LZNA is a high compression LZ (usually a bit more than 7z/LZMA) with better decode speed. Around 2.5X faster to decode than LZMA.

Anyone who needs LZMA-level compression and higher decode speeds should consider LZNA. Currently LZNA requires SSE2 to be fast, so it only runs full speed on modern platforms with x86 chips.

LZNA gets its speed from two primary changes. 1. It uses RANS instead of arithmetic coding. 2. It uses nibble-wise coding instead of bit-wise coding, so it can do 4x fewer coding operations in some cases. The magic sauce that makes these possible is Ryg's realization about mixing cumulative probability distributions. That lets you do the bitwise-style shift update of probabilities (keeping a power of two total), but on larger alphabets.
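
To illustrate the idea, here's a minimal sketch of a nibble-wise adaptive model using that kind of cumulative-distribution mixing update. This is just the technique, not LZNA's actual model; the total, adaptation rate, and mixin construction are assumptions here :

#include <stdint.h>

#define NUM_SYMS    16            // one nibble
#define CDF_TOTAL   (1<<15)       // power-of-two total, convenient for RANS
#define ADAPT_SHIFT 5             // geometric adaptation rate

typedef struct { uint16_t cdf[NUM_SYMS+1]; } nibble_model;   // cdf[0] = 0 .. cdf[NUM_SYMS] = CDF_TOTAL

static void nibble_model_init(nibble_model * m)
{
    for (int i = 0; i <= NUM_SYMS; i++)
        m->cdf[i] = (uint16_t)( (i * CDF_TOTAL) / NUM_SYMS );
}

// after coding symbol s : move the whole cdf a fraction of the way toward a
// distribution that gives s nearly everything, with a floor of 1 on every
// other symbol so no frequency ever reaches zero
// (relies on arithmetic >> of negative ints, as on all common compilers) :
static void nibble_model_update(nibble_model * m, int s)
{
    for (int i = 1; i < NUM_SYMS; i++)
    {
        int mixin = (i <= s) ? i : (CDF_TOTAL - NUM_SYMS + i);
        m->cdf[i] = (uint16_t)( m->cdf[i] + ((mixin - m->cdf[i]) >> ADAPT_SHIFT) );
    }
}

Symbol s is then coded with RANS from the interval [ cdf[s], cdf[s+1] ) out of CDF_TOTAL. Since the update touches all the interior cdf entries at once, it maps naturally onto vector ops, which is presumably where the SSE2 requirement above comes from.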

LZNA usually beats LZMA compression on binary, slightly worse on text. LZNA is closer to LZHAM decompress speeds.


Some results :


lzt99

LZNA -z6 : 24,700,820 -> 9,154,248 =  2.965 bpb =  2.698 to 1
decode only      : 0.327 seconds, 43.75 b/kc, rate= 75.65 mb/s

LZMA : 24,700,820 -> 9,329,925 =  3.021 bpb =  2.647 to 1
decode           : 0.838 seconds, 58.67 clocks, rate= 29.47 M/s

LZHAM : 24,700,820 ->10,140,761 =  3.284 bpb =  2.435 to 1
decode           : 0.264 seconds, 18.44 clocks, rate= 93.74 M/s

(note on settings : LZHAM is run at BETTER because UBER is too slow. LZHAM BETTER is comparable to Oodle's -z6 ; UBER is similar to my -z7. LZMA is run at the best compression setting I can find; -m9 and lc=0,lp=2,pb=2 for binary data; with LZHAM I don't see a way to set the context bits. This is the new LZHAM 1.0, slightly different than my previous tests of LZHAM. All 64-bit, big dictionaries.).


baby_robot_shell

LZNA -z6 : 58,788,904 ->12,933,907 =  1.760 bpb =  4.545 to 1
decode only      : 0.677 seconds, 50.22 b/kc, rate= 86.84 mb/s

LZMA : 58,788,904 ->13,525,659 =  1.840 bpb =  4.346 to 1
decode           : 1.384 seconds, 40.70 clocks, rate= 42.49 M/s

LZHAM : 58,788,904 ->15,594,877 =  2.122 bpb =  3.769 to 1
decode           : 0.582 seconds, 17.12 clocks, rate= 100.97 M/s

I'm not showing encode speeds because they're all running different amounts of threading. It would be complicated to show fairly. LZHAM is the most aggressively threaded, and also the slowest without threading.


My "game testset" total sizes, from most compression to least :


Oodle LZNA -z8 :            57,176,229
Oodle LZNA -z5 :            58,318,469

LZMA -mx9 d26:lc0:lp2:pb3 : 58,884,562
LZMA -mx9 :                 59,987,629

LZHAM -mx9 :                62,621,098

Oodle LZHLW -z6 :           68,199,739

zip -9 :                    88,436,013

raw :                       167,495,105


Here's the new Pareto chart for Oodle. See previous post on these charts

This is load+decomp speedup relative to memcpy : (lzt99)

The left-side Y-intercept is the compression ratio. The right-side Y-intercept is the decompression speed. In between you can see the zones where each compressor is the best tradeoff.

With LZMA and LZHAM : (changed colors)

lzt99 is bad for LZHAM, perhaps because it's heterogeneous and LZHAM assumes pretty stable data. (LZHAM usually beats LZHLW for compression ratio). Here's a different example :

load+decomp speedup relative to memcpy : (baby_robot_shell)


05-08-15 | Bernie Sanders For President

I just watched the Daily Show where Jon just makes fun of Bernie for being unknown and looking/sounding a bit like an older Larry David, without mentioning anything about the fact that Bernie is fucking awesome. Bernie's announcement of "okay let's keep this short, I have to get back to work" is fucking awesome typical Bernie and it's what our politicians should be saying. They should be doing fucking work instead of making elaborate announcement pageants. They should actually be reading and writing laws instead of letting lobbyists and aides do it all for them.

Bernie Sanders is fucking amazing. If you haven't had the great pleasure of hearing him talk at length, go do it now. He is the best politician since I don't even know fucking who (*). I have never in my lifetime heard a single politician that actually speaks honestly and intelligently about the issues. Not talking points. Not just a bunch of bullshit promises. Not just verbal gymnastics to avoid the point. Actually directly talks about the issue in a realistic and pragmatic way.

(* = I have seen video of things like the Nixon-Kennedy debates in which politicians actually talk about issues and try to pin each other down on points of policy, rather than scoring "gotchas" and "applause points". I understand that back in the olden days, pandering to cheap emotional vagaries would get you pilloried in the press as "evasive" or "not serious". I've never seen it in my life.)

Even if you're conservative and don't agree with his views, it should be a fucking breath of fresh air for anyone with any fucking scrap of humanity and decency and intelligence to see a politician that refuses to play the games, refuses to pander to the shitty mass-applause points and the special-interest hate groups and most of all corporate money.

Even though Bernie won't win, I want every single debate to be just Bernie. I don't want to hear a single fucking vapid scripted garbage word from any of the other shit-heads. Just let Bernie talk the whole time.

I disagree with Bernie on some points. He tends to be rather more traditional liberal pro-union pro-manufacturing than I necessarily think is wise. But it's easy to get distracted by the disagreement and miss the point - here is a politician that's actually talking about issues and facts. Even if he gets them wrong sometimes at least he's trying to address real solutions. (this is one of the tricks of most other politicians - don't ever actually propose anything concrete, because then it can be attacked, instead just talk about "hope" or "liberty" or say "America fuck yeah!")

It's classic Bernie that he chose to run as a democrat so that he wouldn't be a spoiler ala Nader. It's just totally realistic, pragmatic, to the point. I fucking love Bernie Sanders.

Watch some Brunch with Bernie

Support Bernie here

Get corporate money out of politics! Elections should be publicly funded only! Stop lobbyists from writing laws! Free trade agreements should not supersede national laws! Corporations are not human beings! Government exists for the service of its citizens! Stop privatizing profit while leaving the government on the hook for risks and long term consequences! And so on.


05-03-15 | Waterproof

I have 3 rain jackets. They all let rain through.

Granted, the first one is actually labelled "water resistant" or "water repellant" or some such nonsense which actually means "fucking useless". But the other two are actually described as "waterproof". And they just aren't.

They seem waterproof at first. Water beads up and runs off and nothing goes through. But over time in a rain they start to get saturated, and eventually they soak through and then they just wick water straight through.

The problem is they're all some fancy waterproof/breathable technical fabric.

IT DOESN'T FUCKING WORK!

Ooo we have this new fancy fabric. NO! No you don't. You have bullshit that doesn't fucking work.

Job #1 : Actually be waterproof.

But it's lighter!, you say. Nope! Zip it! But it's breathable. Zip! Shush. But it's recycled, and rip-stop. Zip. Nope. Is it waterproof? Is it actually fucking waterproof? Like if I stand out in a rain. No, it isn't. Throw it out. It doesn't work. You're fired. Back to the drawing board.

If you want to get fancy, you could use your breathable/not-actually-waterproof material in areas that don't get very wet, such as the inside of the upper arm and the sides of the torso.

At the very least the tops of the shoulders and the upper chest need to be just plastic. Just fucking plastic like a slicker from the 50's. (it could be a separate overhanging shelf of plastic, like a duster type of thing)

Any time I'm browsing REI or whatever these days and see a jacket labelled waterproof, I think "like hell it is".


04-28-15 | Guitar and Teaching

I'm sort of vaguely trying to learn guitar again as a late night alternative to TV.

I'm at the point I always hit where I lose steam. I can play some basic stuff, but not anything too difficult. The problem is I have trouble finding fun songs to play that aren't too hard, or finding songbooks or teach-yourself books that are both fun and not too hard.

What I really want, and what I think is the right way to teach guitar to a dabbler like me is :


A book of songs, in progression of difficulty

The songs need to be modern (post-60's), fun, familiar
(classic rock is pretty safe)

The songs need to be the *actual* songs.  Not simplified versions.  Not transposed versions.
Not just the chords when the real song is much more complex.

When I play it, it needs to sound like the actual recording.

No funny tunings.  I can't be bothered with that.

and so far as I know nothing like that exists.

I've got a bunch of "easy rock guitar songbooks" and they all fucking suck.

There are lots of good tabs on the internet, and I've found some good stuff to learn that way, but fuck that. The last thing I want to be doing in my relaxing time is browsing the internet trying to decide which of the 400 fucking versions of the "Heartbreaker" tab is the right one I should try to learn.

(in the past I taught myself some classical guitar, and in contrast there are lots of great classical, and even finger-picking folk guitar song books and instructional progressions that give you nice songs to learn that are actually fun to play and sound like something)

I've tried taking lessons a few times in the past and the teachers always sucked. Maybe they were good in terms of getting you to be a better player, but they were awful at making it fun.

I had a teacher who wanted me to sit with a metronome and just pick the same note over and over to the metronome to work on my meter. WTF. You're fired. All teachers want you to play scales. Nope. And then for the "fun" part they want to teach some basic rock rhythm, A-A-E-E,A-A-E-E. Nope, I'm bored. You're fired. Then I get to learn a song and it's like some basic blues I've never heard of or some fucking John Denver or something. (*)

They seem to completely fail to understand that it has to keep the student interested. That's part of your job being a teacher. I'm not trying to become a professional musician. I don't have some great passionate motivation that's going to keep me going through the boring shit and drudgery of your lessons. You have to make it fun all the time.

(* = there is some kind of weird thing where people who play music generally have horrible taste in music. It's like they pay too much attention to either the notes/chord/key or to the technical playing, neither of which actually matter much. It's the feeling, man.)

It was interesting for me to see this in ceramics. I was lucky to find a really great teacher (Bill Wilcox) who understood that it had to be fun, and that this was a bit of a lark for most of us, and some were more serious than others. Maybe he wasn't the most efficient teacher in terms of conveying maximum learning in a set time period - but he kept you coming back. We occasionally had different guest teachers, and they were way more regimented and methodical and wanted you to do drills (pull a cylinder 20 times and check the walls for evenness), and you could see half the class thinking "fuck this".

I suppose this is true of all learning for some kids. Some kids are inherently motivated, I'm going to learn because I'm supposed to, or to get into a good college, or "for my future", or to be smarter than everyone else, or whatever. But other kids see a cosine and think "wtf is that for, who cares".

I've always thought the right way to teach anything is with a goal in mind. Not just "hey learn this because you're supposed to". But "we want to build a catapult and fire it and hit a target. Okay, we're going to need to learn about angles and triangles and such...". The best/easiest/deepest learning is what you learn because you need to learn it to accomplish something that you want to do.

What I need in guitar is a series of mini goals & accomplishments. Hey here's this new song, and it's a little bit too hard for me, but it's a fucking cool song so I actually want to play it. So I practice for a while and get better, and then I can play it, yay! Then I move on to the next one. Just like good game design. And WTF it just doesn't seem to exist.


04-24-15 | Typical Email Experience

I write very careful emails with clear points and specific questions, something like :

Hello, yes blah blah some stuff.  I need to know these points :

1. What about A?

2. There is also b?

3. and finally C?

and I usually get a response like :

Yep, great!

Umm. WTF. You are fucking fired from your job, from life, from the planet, go away.

Yep to what? There were THREE questions in there. And none of them was really a yes/no question anyway. WTF.

So I'll try to be polite and send back something like -


Thanks for the response; yes, to what exactly?  Did you mean yes to A?

Also I still need to know about B & C.

and then I'll get a response like :

Ok, on B we do this and that.

Umm. Okay, that's better. We got one answer, but THERE WERE THREE FUCKING QUESTIONS. I fucking numbered them so you could count them. That means I need three answers.

Sometimes I'll get a response like :


Ramble ramble, some unrelated stuff, sort of answer maybe A and C but not exactly, some
other rambling.

Okay. Thanks for writing a lot of words, but I HAD SPECIFIC FUCKING QUESTIONS.

This is basic fucking professionalism.

Jesus christ.


04-20-15 | Vitamin D

This post is a month or two too late, since we're now into the sun-times (I hope, fingers crossed), but anyway.

Everybody knows when you move to Seattle and get SAD you have to take vitamin D. So all these years I've been taking a few Vit D pills every day in the winter.

Recently my hair has been falling out, which has never happened to me before. I was pretty sure it was just stress, but I thought hey WTF may as well get a blood test and see if anything is wrong. So I get a blood test. Everything is normal, except -

My vit D levels were way way below normal. There's a normal range (3-7) and I was like a 1.

I was like, WTF? I take 1-2 vit D pills every day.

Turns out those pills are 1000 IU each, so I was taking 1000-2000 IU. I thought that was a hell of a lot (it's ten million percent of the US RDA). Nope, not a lot. My doc said I could take 8000-10,000 IU to get my levels back up to normal, then maintenance was more like 5000 IU.

So, I started taking big doses, and BOOM instant happiness. More energy, less depression.

I still fucking hate the gray and the wet. (I recently got some foot fungus from hiking on a rainy day. I hate the fucking wet. I'd like to live on Arrakis and never see a single drop of rain again in my life.) But hey with proper vit D dosing I don't want to kill myself every day. Yay.

Pound that D, yo!


04-14-15 | Bend Oregon

Bend in a nutshell :

Restaurants here write "sando" on the menu when describing sandwiches.

I'm not talking about casual order-at-the-counter places with "crazy" graphics. I mean fancier places with good food. Not conversation, written word. "House-made roast beef sando with blue cheese and caramelized onions". Sando.


Bend has two primary demographics :

The rednecks, which unfortunately still infest so much of Oregon (including, surprisingly, even quite a lot of Portland, mainly on the outside, like a nasty fiery red infection around the anus of liberal Portland). The rednecks wear flannel and baseball caps, drive big trucks, they like off-roading, beer and dogs.

The snowboarders. They wear flannel, baseball caps, drive big trucks, like snowboarding, beer and dogs.

They're actually easy to tell apart. The snowboarders wear $100 designer Helly Hansen flannel, the rednecks wear cheap Walmart flannel.

No, actually you can tell them apart because the rednecks are all so damn ugly. They're constantly angry, they don't smile at you, they stomp around and have bad posture. The contrast is severe, which brings us to the next point :


Bend is the happiest place I have ever seen in my life. It's fucking ridiculous.

For Oddworlders, it's like a town full of Bonnies. For Tash, it's a town full of Bagleys.

Everyone is clear-eyed, that bright clear eye sweetness you get from lots of exercise and being outdoors, that gives you inner peace and patience and just fixes everything. Everyone is so sweet and friendly in a real way, not in that syrupy phoney Southern way.

Kids just walk down the street. There were sweet little kids playing everywhere. People ride their bikes to the corner restaurant and just leave them on the rack unlocked.

It's a fucking utopia.


It's horrible being around those positive, sweet, wholesome people who have great life priorities and friends and seem to have fun doing anything. It makes me sick. You are everything I should be in life and am not. They're the kind of people who buy their friends presents because they actually want to. The kind of people who stand up and dance in a bar when noone else is, and just don't even think about it and laugh and have a great time. I hate you so much. It must be the way ugly girls feel when they visit LA.


Everyone in Bend is a massive alcoholic. When you order your coffee in the morning, they ask if you want a craft brewed IPA with that. There's a brewery per person.


On the down side, there is no road biking around Bend. It's supposed to be this biking mecca, but I guess just for mountain bikes. I think road biking in Oregon is probably shit everywhere. There are no nice dis-used windy roads. God I miss the California biking. I can name like 20 truly epic rides on the west coast of the US and every single one of them is in CA. (roads with lots of traffic and/or no shoulder and/or bad pavement are all disqualified; morons who put things like Hwy 1 on the "good rides" list are morons).


So many fucking stinky trucks. I love to drive around and get that fresh piney mountain air, but every fucking place you go there's some damn F350 enormous monster belching soot in your face. It seems like maybe people in Oregon like diesel more than most of the US? I dunno, I've never noticed so many damn stinky trucks in my life. I guess in Texas I wouldn't even try to open my window.



Pros :

Dry!

Sunny!

Cheap housing (relative to Seattle & Portland and of course CA)

Gorgeous (I love that dry piney stuff; better than the swampy crap west of the Cascades)

Ski all winter!

Sweet for kids (up until high school age or so, then toxic)

Great mountain biking


Cons :

Cold

No road biking

Nice people

Rednecks

Low population = fewer people to choose from


03-25-15 | My Chameleon

I did my own implementation of the Chameleon compression algorithm. (the original distribution is via the density project)

This is the core of Chameleon's encoder :

    cur = *fm32++; h = CHAMELEON_HASH(cur); flags <<= 1;           // grab the next dword, hash it, make room for a new flag bit
    if ( c->hash[h] == cur ) { flags ++; *to16++ = (uint16) h; }    // hash hit : set the flag bit, output the 16-bit hash index
    else { c->hash[h] = cur; *((uint32 *)to16) = cur; to16 += 2; }  // miss : update the table, output the raw 32-bit value

This is the decoder :

    if ( (int16)flags < 0 ) { cur = c->hash[ *fm16++ ]; }           // flag bit set : fetch the value from the hash table by 16-bit index
    else { cur = *((const uint32 *)fm16); fm16 += 2; c->hash[ CHAMELEON_HASH(cur) ] = cur; }  // flag clear : read the raw dword, update the table
    flags <<= 1; *to32++ = cur;                                     // advance to the next flag bit, output the dword

I thought it deserved a super-simple STB-style header-only dashfuly-described implementation :

Chameleon.h

My Chameleon.h is not portable or safe or any of that jizzle. Maybe it will be someday. (Update : now builds on GCC & clang. Tested on PS4. Still not Endian-invariant.)


// Usage :

#define CHAMELEON_IMPL
#include "Chameleon.h"

Chameleon c;

Chameleon_Reset(&c);

size_t comp_buf_size = CHAMELEON_MAXIMUM_OUTPUT_SIZE(in_size);

void * comp_buf = malloc(comp_buf_size);

size_t comp_len = Chameleon_Encode(&c, comp_buf, in_buf, in_size );

Chameleon_Reset(&c);

Chameleon_Decode(&c, out_buf, in_size, comp_buf );

int cmp = memcmp(in_buf,out_buf,in_size);
assert( cmp == 0 );


ADD : Chameleon2 SIMD prototype now posted : (NOTE : this is not good, do not use)

Chameleon2.h - experimental SIMD wide Chameleon
both Chameleons in a zip

The SIMD encoder is not fast. Even on SSE4 it only barely beats scalar Chameleon. So this is a dead end. Maybe some day when we get fast hardware scatter/gather it will be good (*).

(* = though use of hardware scatter here is always going to be treacherous, because hashes may be repeated, and the order in which collisions resolve must be consistent)


03-25-15 | Density - Chameleon

Casey pointed me at Density.

Density contains 3 algorithms, from super fast to slower : Chameleon, Cheetah, Lion.

They all attain speed primarily by working on U32 quanta of input, rather than bytes. They're sort of LZPish type things that work on U32's, which is a reasonable way to get speed in this modern world. (Cheetah and Lion are really similar to the old LZP1/LZP2 with bit flags for different predictors, or to some of the LZRW's that output forward hashes; the main difference is working on U32 quanta and no match lengths)

The compression ratio is very poor. The highest compression option (Lion) is around LZ4-fast territory, not as good as LZ4-hc. But, are they Pareto? Is it a good space-speed tradeoff?

Well, I can't build Density (I use MSVC) so I can't test their implementation for space-speed.

Compressed sizes :


lzt99 :
uncompressed       24,700,820

density :
c0 Chameleon       19,530,262
c1 Cheetah         17,482,048
c2 Lion            16,627,513

lz4 -1             16,193,125
lz4 -9             14,825,016

Oodle -1 (LZB)     16,944,829
Oodle -2 (LZB)     16,409,913

Oodle LZNIB        12,375,347

(lz4 -9 is not competitive for encode time, it's just to show the level of compression you could get at very fast decode speeds if you don't care about encode time ; LZNIB is an even more extreme case of the same thing - slow to encode, but decode time comparable to Chameleon).

To check speed I did my own implementation of Chameleon (which I believe to be faster than Density's, so it's a fair test). See the next post to get my implementation.

The results are :

comp_len = 19492042
Chameleon_Encode_Time : seconds:0.0274 ticks per: 1.919 mbps : 901.12
Chameleon_Decode_Time : seconds:0.0293 ticks per: 2.050 mbps : 843.31

round trip time = 0.05670

I get a somewhat smaller file size than Density's version for unknown reasons.

Let's compare to Oodle's LZB (an LZ4ish) :


Oodle -1 :

24,700,820 ->16,944,829 =  5.488 bpb =  1.458 to 1
encode           : 0.061 seconds, 232.40 b/kc, rate= 401.85 mb/s
decode           : 0.013 seconds, 1071.15 b/kc, rate= 1852.17 mb/s

round trip time = 0.074

Oodle -2 :

24,700,820 ->16,409,913 =  5.315 bpb =  1.505 to 1 
encode           : 0.070 seconds, 203.89 b/kc, rate= 352.55 mb/s
decode           : 0.014 seconds, 1008.76 b/kc, rate= 1744.34 mb/s

round trip time = 0.084

lzt99 is a collection of typical game data files.

We can test on enwik8 (text/html) too :


Chameleon :

enwik8 :
Chameleon_Encode_Time : seconds:0.1077 ticks per: 1.862 mbps : 928.36
Chameleon_Decode_Time : seconds:0.0676 ticks per: 1.169 mbps : 1479.08
comp_len = 61524068

Oodle -1 :

enwik8 : 
100,000,000 ->57,267,299 =  4.581 bpb =  1.746 to 1 
encode           : 0.481 seconds, 120.17 b/kc, rate= 207.79 mb/s
decode           : 0.083 seconds, 697.58 b/kc, rate= 1206.19 mb/s

Here Chameleon is much more compelling. It's competitive for size & decode speed, not just encode speed.

Commentary :

Any time you're storing files on disk, this is not the right algorithm. You want something more asymmetric (slow compress, fast decompress).

I'm not sure if Cheetah and Lion are Pareto for round trip time. I'd have to test speed on a wider set of sample data.

When do you actually want a compressor that's this fast and gets so little compression? I'm not sure.


03-15-15 | LZ Literal Correlation Images

I made some pictures.

I'm showing literal correlation by making an image of the histogram. That is, given an 8-bit predictor, you tally each event :


int histo[256][256]

histo[predicted][value] ++

then I scale the histo so the max is at 255 and make it into an image.

Most of the images that I show are in log scale, otherwise all the detail is too dark, dominated by a few peaks. I also sometimes remove the predicted=value line, so that the off axis detail is more visible.
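As a concrete sketch of that tally-and-scale step, something like the following C would produce one of these images. The PGM output, the helper layout, and the exact 16-levels-per-octave mapping are my assumptions for illustration, not code from this project :

/* Sketch : build the predicted-vs-value histogram and dump it as a
   log-scaled grayscale PGM.  pred[] / val[] stand in for however the
   (predicted, actual) literal pairs are collected from the parse. */

#include <stdio.h>
#include <string.h>
#include <math.h>

static void histo_image(const unsigned char *pred, const unsigned char *val,
                        int count, const char *out_name, int remove_diagonal)
{
    static int histo[256][256];
    int x, y, i, peak = 1;

    memset(histo, 0, sizeof(histo));
    for (i = 0; i < count; i++)
        histo[pred[i]][val[i]]++;           /* histo[predicted][value] ++ */

    if (remove_diagonal)                    /* the "ND" images */
        for (i = 0; i < 256; i++)
            histo[i][i] = 0;

    for (y = 0; y < 256; y++)
        for (x = 0; x < 256; x++)
            if (histo[y][x] > peak) peak = histo[y][x];

    FILE *f = fopen(out_name, "wb");
    fprintf(f, "P5\n256 256\n255\n");
    for (y = 0; y < 256; y++)               /* y = prediction, x = current value */
        for (x = 0; x < 256; x++) {
            int pix = 0;
            if (histo[y][x] > 0) {
                /* log scale : each halving of probability drops 16 pixel levels */
                double octaves = log((double)peak / histo[y][x]) / log(2.0);
                pix = (int)(255.0 - 16.0 * octaves);
                if (pix < 0) pix = 0;
            }
            fputc(pix, f);
        }
    fclose(f);
}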

Let's stop a moment and look at what we can see in these images.

This is a literal histo of "lzt99" , using predicted = lolit (last offset literal; the rep0len1 literal). This is in log scale, with the diagonal removed :

In my images y = prediction and x = current value. x=0, y=0 is in the upper left instead of the lower left where it should be because fucking bitmaps are retarded (everyone is fired, left handed coordinate systems my ass).

The order-0 probability is the vertical line sum for each x. So any vertical lines indicate just strong order-0 correlations.

Most files are a mix of different probability sources, which makes these images look like a sum of different contributing factors.

The most obvious factor here is the diagonal line at x=y. That's just a strong value=predicted generator.

The red blob is a cluster of events around x and y = 0. This indicates a probability event that's related to |x+y| being small. That is, the sum, or length, or something tends to be small.

The green shows a square of probabilities. A square indicates that for a certain range of y's, all x's are equally likely. In this case the range is 48-58. So if y is in 48-58, then any x in 48-58 is equally likely.

There are similar weaker squarish patterns all along the diagonal. Surprisingly these are *not* actually at the binary 8/16 points you might expect. They're actually in steps of 6 & 10.

The blue blobs are at x/y = 64/192. There's a funny very specific strong asymmetric pattern in these. When y = 191, it predicts x=63,62,61,60 - but NOT 64,65,66. Then at y=192, it predicts x=64,65,66, but not 63.

In addition to the blue blobs, there are weak dots at all the 32 multiples. This indicates that when y= any multiple of 32, there's a generating event for x = any multiple of 32. (Note that in log scale, these dots look more important than they really are.). There are also some weak order-0 generators at x=32 and so on.

There's some just general light gray background - that's just uncompressible random data (as seen by this model anyway).


Here's a bunch of images : (click for hi res)

(image grid : each file below links to raw / sub / xor versions, each in log, logND, and linND variants)
Fez LO
Fez O1
lzt24 LO
lzt24 O1
lzt99 LO
lzt99 O1
enwik7 LO
enwik7 O1

details :

LO means y axis (predictor) is last-offset-literal , in an LZ match parse. Only the literals coded by the LZ are shown.

O1 means y axis is order1 (previous byte). I didn't generate the O1 from the LZ match parse, so it's showing *all* bytes in the file, not just the literals from the LZ parse.

"log" is just log-scale of the histo. An octave (halving of probability) is 16 pixel levels.

"logND" is log without the x=y diagonal. An octave is 32 pixel levels.

"linND" is linear, without the x=y diagonal.

"raw" means the x axis is just the value. "xor" means the x axis is value^predicted. "sub" means the x axis is (value-predicted+127).

Note that raw/xor/sub are just permutations of the values along a horizontal axis, they don't change the values.
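In code the three x-axis mappings are just the following; wrapping the "sub" case back into 0..255 via the uint8 cast is my assumption about how the +127 bias is meant :

x_raw = value;
x_xor = value ^ predicted;
x_sub = (uint8) (value - predicted + 127);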


Discussion :

The goal of a de-correlating transform is to create vertical lines. Vertical lines are order-0 probability peaks and can be coded without using the predictor as context at all.

If you use an order-0 coder, then any detail which is not in a vertical line is an opportunity for compression that you are passing up.

"Fez" is obvious pure delta data. "sub" is almost a perfect model for it.

"lzt24" has two (three?) primary probability sources. One is almost pure "sub" x is near y data.

The other sources, however, do not do very well under sub. They are pure order-0 peaks at x=64 and 192 (vertical lines in the "raw" image), and also those strange blobs of correlation at (x/y = 64 and 192). The problem is "sub" turns those vertical lines into diagonal lines, effectively smearing them all over the probability spectrum.

A compact but full model for the lzt24 literals would be like this :


is y (predictor) near 64 or 192 ?

if so -> strongly predict x = 64 or 192

else -> predict x = y or x = 64 or 192 (weaker)

lzt99, being more heterogeneous, has various sources.

"xor" takes squares to squares. This works pretty well on text.

In general, the LO correlation is easier to model than O1.

The lzt99 O1 histo in particular has lots of funny stuff. There are a bunch of non-diagonal lines, indicating things like x=y/4 patterns, which is odd.


03-04-15 | LZ Match Finding Redux

Some time ago I did a study of LZ match finders. Since then I've somewhat changed my view. This will be a bit hand wavey. Previous posts :

cbloom rants 09-30-11 - String Match Results Part 5 + Conclusion
cbloom rants 11-02-11 - StringMatchTest Release
cbloom rants 09-24-12 - LZ String Matcher Decision Tree

From fast to slow :

All my fast matchers now use "cache tables". In fact I now use cache tables all the way up to my "Normal" level (default level; something like zip -7).

With cache tables you have a few primary parameters :

hash bits
hash ways
2nd hash

very fastest (0) :
  hash ways = 1
  2nd hash = off

1 :
  hash ways = 2
  2nd hash = off

2 :
  hash ways = 2
  2nd hash = on

...

  hash ways = 16
  2nd hash = on

The good thing about cache tables is the cpu cache coherency. You look up by the hash, and then all your possible matches are right there in one cache line. (there's an option of whether you store the first U32 of each match right there in the cache table to avoid a pointer chase to check the beginning of the match).
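A rough sketch of what one cache-table probe looks like; the sizes, eviction policy, hash_u32() and match_length() are placeholders of mine to illustrate the idea, not Oodle's actual code (a real version also has to handle empty slots and buffer ends) :

/* One hash bucket holds CT_WAYS candidate positions plus the first U32
   at each position, so a probe touches a single cache line. */

#include <string.h>

typedef unsigned char uint8;
typedef unsigned int  uint32;

#define CT_HASH_BITS 17
#define CT_WAYS       4

typedef struct {
    uint32 first4[CT_WAYS];   /* first 4 bytes at each candidate position */
    uint32 pos[CT_WAYS];      /* the candidate positions themselves */
} CacheTableRow;

static CacheTableRow cache_table[1 << CT_HASH_BITS];

extern uint32 hash_u32(uint32 x);                            /* assumed hash helper */
extern int    match_length(const uint8 *a, const uint8 *b);  /* assumed length counter */

static int cache_table_find(const uint8 *buf, uint32 cur_pos, uint32 *match_pos)
{
    const uint8 *ptr = buf + cur_pos;
    uint32 cur4; memcpy(&cur4, ptr, 4);
    CacheTableRow *row = &cache_table[ hash_u32(cur4) & ((1 << CT_HASH_BITS) - 1) ];

    int best_len = 0;
    for (int w = 0; w < CT_WAYS; w++) {
        if (row->first4[w] != cur4) continue;   /* cheap reject without a pointer chase */
        int len = match_length(buf + row->pos[w], ptr);
        if (len > best_len) { best_len = len; *match_pos = row->pos[w]; }
    }

    /* insert the current position, evicting the oldest way (simple shift) */
    for (int w = CT_WAYS - 1; w > 0; w--) {
        row->first4[w] = row->first4[w-1];
        row->pos[w]    = row->pos[w-1];
    }
    row->first4[0] = cur4;
    row->pos[0]    = cur_pos;

    return best_len;
}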

Cache tables are a superb space-speed tradeoff up until ways hits around 16, and then they start to lose to hash-link.

Hash-link :

Hash link is still good, but can be annoying to make fast (*) and does have bad degenerate case behavior (when you have a bad hash collision, the links on that chain get overloaded with crap).

(* = you have to do dynamic amortization and shite like that which is not a big deal, but ugly; this is to handle the incompressible-but-lots-of-hash-collisions case, and to handle the super-compressible-lots-of-redundant-matches case).

The good thing about hash-link is that you are strictly walking matches in increasing offset order. This means you only need to consider longer lengths, which helps break the O(N^2) problem in practice (though not in theory). It also gives you a very easy way to use a heuristic to decide if a match is better or not. You're always doing a simple compare :

previous best match
  vs.
new match with
  higher offset
  longer length

which is a lot simpler than something like the cache table case where you see your matches in random order.
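As a sketch of that comparison : candidates arrive in increasing offset order, so only a strictly longer match can ever displace the current best. The bias that charges the extra offset bits against the extra length is purely illustrative, not any particular codec's cost model, and log2_u32() is an assumed top-set-bit helper :

typedef unsigned int uint32;

extern int log2_u32(uint32 x);   /* assumed : position of the highest set bit */

static int new_match_is_better(int best_len, uint32 best_off,
                               int new_len,  uint32 new_off)
{
    if (new_len <= best_len)
        return 0;   /* same or shorter length at a bigger offset : never better */

    /* require the added length to roughly pay for the bigger offset */
    int extra_offset_bits = log2_u32(new_off) - log2_u32(best_off);
    int extra_length      = new_len - best_len;
    return extra_length * 8 >= extra_offset_bits;
}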

Being rather redundant : the nice thing about hash-link is that any time you find a match length, you know absolutely that you have the lowest offset occurrence of that match length.

I'm not so high on Suffix Tries any more.

*if* your LZ just needs the longest length at each position, they're superb. If you actually need the best match at every position (traditional optimal parse), they're superb. eg. if you were doing LZSS with large fixed-size offsets, you just want to find the longest match all the time - boom Suffix Trie is the answer. They have no bad degenerate case, that's great.

But in practice on a modern LZ they have problems.

The main problem is that a Suffix Trie is actually quite bad at finding the lowest offset occurrence of a short match. And that's a very important thing to be good at for LZ. The problem is that a proper ST with follows is doing its updates way out deep in the leaves of the tree, but the short matches are up at the root, and they are pointing at the *first* occurrence of that substring. If you update all parents to the most recent pointer, you lose your O(N) completely.

(I wrote about this problem before : cbloom rants 08-22-13 - Sketch of Suffix Trie for Last Occurance )

You can do something ugly like use a suffix trie to find long matches and a hash->link with a low walk limit to find the most recent occurrence of short matches. But bleh.

And my negativity about Suffix Tries also comes from another point :

Match finding is not that important. Well, there are a lot of caveats on that. On structured data (not text), with a pos-state-lastoffset coder like LZMA - match finding is not that important. Or rather, parsing is more important. Or rather, parsing is a better space-speed tradeoff.

It's way way way better to run an optimal parse with a crap match finder (even cache table with low ways) than to run a heuristic parse with great match finder. The parse is just way more important, and per CPU cost gives you way more win.

And there's another issue :

With a forward optimal parse, you can actually avoid finding matches at every position.

There are a variety of ways to skip ahead in a forward optimal parse :


Any time you find a very long match -
 just take it and skip ahead
 (eg. fast bytes in LZMA)

 this can reduce the N^2 penalty of a bad match finder

When you are not finding any matches -
 start taking multi-literal steps
 using something like (failedMatches>>6) heuristic

When you find a long enough rep match -
 just take it
 and this "long enough" can be much less than "fast bytes"
 eg. fb=128 for skipping normal matches
 but you can just take a rep match at length >= 8
 which occurs much more often

The net result is lots of opportunity for more of a "greedy" type of match finding in your optimal parser, where you don't need every match.
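A rough sketch of those three skip rules inside a forward-parse loop; Parser and the find_* / emit_* calls are placeholders for whatever parse machinery you have, and the constants are illustrative, not tuned values :

#define MIN_MATCH_LEN   4
#define FAST_BYTES    128   /* take a normal match this long immediately */
#define REP_TAKE_LEN    8   /* take a rep match this long immediately */

typedef struct Parser Parser;
extern int  find_rep_match(Parser *p, int pos);
extern int  find_normal_match(Parser *p, int pos);
extern void emit_rep(Parser *p, int pos, int len);
extern void emit_match(Parser *p, int pos, int len);
extern void emit_literals(Parser *p, int pos, int count);
extern int  optimal_parse_step(Parser *p, int pos, int match_len, int rep_len);

void forward_parse(Parser *p, int pos, int end)
{
    int failedMatches = 0;

    while (pos < end)
    {
        int rep_len   = find_rep_match(p, pos);
        int match_len = find_normal_match(p, pos);

        if (rep_len >= REP_TAKE_LEN) {          /* long enough rep : just take it */
            emit_rep(p, pos, rep_len);
            pos += rep_len; failedMatches = 0; continue;
        }
        if (match_len >= FAST_BYTES) {          /* very long match : just take it */
            emit_match(p, pos, match_len);
            pos += match_len; failedMatches = 0; continue;
        }
        if (match_len < MIN_MATCH_LEN && rep_len < MIN_MATCH_LEN) {
            failedMatches++;                    /* barren region : take multi-literal steps */
            int step = 1 + (failedMatches >> 6);
            emit_literals(p, pos, step);
            pos += step; continue;
        }

        /* otherwise do the normal optimal-parse work at this position */
        pos = optimal_parse_step(p, pos, match_len, rep_len);
        failedMatches = 0;
    }
}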

This means that good greedy-parse match finders like hash-link and Yann's MMC ( my MMC note ) become interesting again (even for optimal parsing).


03-02-15 | Oodle LZ Pareto Frontier

I made a chart.

I'm showing the "total time to load" (time to load off disk at a simulated disk speed + time to decompress). You always want lower total time to load - smaller files make the simulated load time less, faster decompression makes the decompress time less.


total_time_to_load = compressed_size / disk_speed + decompress_time

It looks neatest in the form of "speedup". "speedup" is the ratio of the effective speed vs. the speed of the disk :


effective_speed = raw_size / total_time_to_load

speedup = effective_speed / disk_speed

By varying disk speed you can see the tradeoff of compression ratio vs. cpu usage that makes different compressors better in different application domains.

If we write out what speedup is :


speedup = raw_size / (compressed_size + decompress_time * disk_speed)

speedup = 1 / (1/compression_ratio + disk_speed / decompress_speed)

speedup ~= harmonic_mean( compression_ratio , decompress_speed / disk_speed )

we can see that it's a "harmonic lerp" between compression ratio on one end and decompress speed on the other end, with the simulated disk speed as lerp factor.
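Each point on the curves below is just this computation; a trivial helper (names mine), straight from the formulas above :

double speedup(double raw_size, double compressed_size,
               double decompress_time, double disk_speed)
{
    double total_time_to_load = compressed_size / disk_speed + decompress_time;
    double effective_speed    = raw_size / total_time_to_load;
    return effective_speed / disk_speed;
}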

These charts show "speedup" vs. log of disk_speed :

(the log is log2, so 13 is a disk speed of 8192 mb/s).

On the left side, the curves go flat. At the far left (x -> -infinity, disk speed -> 0) the height of each curve is proportional to the compression ratio. So just looking at how they stack up on the far left tells you the compression ratio performance of each compressor. As you go right more, decompression speed becomes more and more important and compressed size less so.

With ham-fisted-shading of the regions where each compressor is best :

The thing I thought was interesting is that there's a very obvious Pareto frontier. If I draw a tangent across the best compressors :

Note that at the high end (right), the tangent goes from LZB to "memcpy" - not to "raw". (raw is just the time to load the raw file from disk, and we really have to compare to memcpy because all the compressors fill a buffer that's different from the IO buffer). (actually the gray line I drew on the right is not great, it should be going tangent to memcpy; it should be shaped just like each of the compressors' curves, flat on the left (compressed size dominant) and flat on the right (decompress time dominant))

You can see there are gaps where these compressors don't make a complete Pareto set; the biggest gap is between LZA and LZH which is something I will address soon. (something like LZHAM goes there) And you can predict what the performance of the missing compressor should be.

It's also a neat way to test that all the compressors are good tradeoffs. If the gray line didn't make a nice smooth curve, it would mean that some compressor was not doing a good job of hitting the maximum speed for its compression level. (of course I could still have a systematic inefficiency; like quite possibly they're all 10% worse than they should be)


ADDENDUM :

If instead of doing speedup vs. loading raw you do speedup vs. loading raw + memcpy, you get this :

The nice thing is the right-hand asymptotes are now constants, instead of decaying like 1/disk_speed.

So the left hand y-intercepts (disk speed -> 0) show the compression ratio, the right hand y-intercepts (disk speed -> inf) show the decompression speed, and in between shows the tradeoff.


03-02-15 | LZ Rep-Match after Match Strategies

Tiny duh note.

When I did the Oodle LZH I made a mistake. I used a zip-style combined codeword. Values 0-255 are a literal, and 256+ contain the log2ish of length and offset. The advantage of this is that you only have one Huff table and just do one decode, then if it's a match you also fetch some raw bits. It also models length-offset correlation by putting them in the codeword together. (this scheme is missing a lot of things that you would want in a more modern high compression LZ, like pos-state patterns and so on).

Then I added "rep matches" and just stuck them in the combined codeword as special offset values. So the codeword was :

{
256 : literal
4*L : 4 rep matches * L length slots (L=8 or whatever = 32 codes)
O*L : O offset slots * L length slots (O=14 and L = 6 = 84 codes or whatevs)
= 256+32+84 = 372
}

The problem is that rep-match-0 can never occur after a match. (assuming you write matches of maximum length). Rep-match-0 is quite important, on binary/structured files it can have very high probability. By using a single codeword which contains rep-match-0 for all coding events, you are incorrectly mixing the statistics of the after match state (where rep-match-0 has zero probability) and after literal state (where rep-match-0 has high probability).

A quick look at the strategies for fixing this :

1. Just use separate statistics. Keep the same combined codeword structure, but have two entropy tables, one for after-match and one for after-literal. This would also let you code the literal-after-match as an xor literal with separate statistics for that.

Whether you do xor-lit or not, there will be a lot of shared probability information between the two entropy tables, so if you do static Huffman or ANS probability transmission, you may need to exploit the similarity between the two tables in that transmission.

In a static Huffman or ANS entropy scheme if rep-match-0 never occurs in the after-match code table, it will be given a codelen of 0 (or impossible) and won't take any code space at all. (I guess it does take a little code space unless you also explicitly special case the knowledge that it must have codelen 0 in your codelen transmitter)
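A bare-bones sketch of strategy 1; HuffTable, BitStream and huff_encode are placeholders for whatever static Huffman / ANS machinery is already present, the state switch is the whole point :

typedef struct HuffTable HuffTable;
typedef struct BitStream BitStream;
extern void huff_encode(HuffTable *t, BitStream *bs, int symbol);

typedef enum { AFTER_LITERAL = 0, AFTER_MATCH = 1 } CoderState;

typedef struct {
    HuffTable *codeword[2];    /* [AFTER_LITERAL] and [AFTER_MATCH] tables */
} TwoStateCoder;

static void encode_event(TwoStateCoder *c, CoderState *state,
                         BitStream *bs, int symbol, int is_match)
{
    /* rep-match-0 symbols simply never occur in the AFTER_MATCH table, so
       they get zero probability / codelen 0 and cost no code space there */
    huff_encode(c->codeword[*state], bs, symbol);
    *state = is_match ? AFTER_MATCH : AFTER_LITERAL;
}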

This is the simplest version of the more general case :

2. Context-code the rep-match event using match history. As noted just using "after match" or "after literal" as the context is the simplest version of this, but more detailed history will also affect the rep match event. This is the natural way to fix it in an adaptive arithmetic/ANS coder which uses context coding anyway. eg. this is what LZMA does.

Here we aren't forbidding rep-match after match, we're just using the fact that it never occurs to make its probability go to 0 (adaptively) and thus it winds up taking nearly zero code space. In LZMA you actually can have a rep match after match because the matches have a max length of 273, so longer matches will be written as rep matches. Ryg pointed out that after a match that's been limited by max-length, LZMA should really consider the context for the rep-match coding to be like after-literal, not after-match.

In Oodle's LZA I write infinite match lengths, so this is simplified. I also preload the probability of rep-match in the after-match contexts to be near zero. (I actually can't preload exactly zero because I do sometimes write a rep-match after match due to annoying end-of-buffer and circular-window edge cases). Preconditioning the probability saves the cost of learning that it's near zero, which saves 10-100 bytes.

3. Use different code words.

Rather than relying on statistics, you can explicitly use different code words for the after-match and after-literal case. For example in something like an LZHuf as described above, just use a codeword in the after-match case that omits the rep0 codes, and thus has a smaller alphabet.

This is most clear in something like LZNib. LZNib has three events :

LRL
Match
Rep Match

So naively it looks like you need to write a ternary decision at each coding (L,M,R). But in fact only two of them are ever possible :

After L - M or R
  cannot write another L because we would've just made LRL longer

After M or R - L or M
  cannot write an R because we would've just made the match longer

So LZNib writes the binary choices (M/R) after L and (L/M) after M or R. Because they're always binary choices, this allows LZNib to use the simple single-divider method of encoding values in bytes.
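A sketch of that two-state decode loop. Decoder and the decode_* calls are placeholders; in real LZNib the binary choice is not a separate bit, it's folded into the byte value with the single-divider method, but the state logic is the same. The initial state is an assumption (a stream that starts with literals behaves like "after match") :

typedef struct Decoder Decoder;
extern int  done(Decoder *d);
extern int  decode_choice(Decoder *d);     /* the binary decision, however it's coded */
extern void decode_lrl(Decoder *d);
extern void decode_match(Decoder *d);
extern void decode_rep_match(Decoder *d);

typedef enum { AFTER_LRL, AFTER_MATCH } NibState;

void decode_events(Decoder *d)
{
    NibState state = AFTER_MATCH;          /* assumed starting convention */

    while (!done(d))
    {
        if (state == AFTER_LRL) {
            /* only Match or RepMatch can follow a literal run */
            if (decode_choice(d)) decode_rep_match(d);
            else                  decode_match(d);
            state = AFTER_MATCH;
        } else {
            /* only LRL or Match can follow a match */
            if (decode_choice(d)) { decode_lrl(d);   state = AFTER_LRL; }
            else                  { decode_match(d); state = AFTER_MATCH; }
        }
    }
}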

4. Use a combined code word that includes the conditioning state.

Instead of context modeling, you can always take previous events that you need context from and make a combined codeword. (eg. if you do Huffman on the 16-bit bigram literals, you get order-1 context coding on half the literals).

So we can make a combined codeword like :

{
(LRL 0,1,2,3+) * (rep matches , normal matches) * (lengths slots)
= 4 * (4 + 14) * 8 = 576
}

Which is a pretty big alphabet, but also combined length slots so you get lrl-offset-length correlation modeling as well.

In a combined codeword like this you are always writing a match, and any literals that precede it are written with an LRL (may be 0). The forbidden codes are LRL=0 and match =rep0 , so you can either just let those get zero probabilities, or explicitly remove them from the codeword to reduce the alphabet. (there are also other forbidden codes in normal LZ parses, such as low-length high-offset codes, so you would similarly remove or adjust those)
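Packing that combined codeword is just mixed-radix arithmetic; a sketch using the slot counts from the 576-symbol example above (the forbidden lrl=0 / rep0 combinations would either be removed from the alphabet or left to get zero probability, as described) :

/* lrl_slot in [0,3], match_kind in [0,17] (4 reps + 14 offset slots),
   len_slot in [0,7]  ->  symbol in [0,575] */
static int make_combined_codeword(int lrl_slot, int match_kind, int len_slot)
{
    return (lrl_slot * 18 + match_kind) * 8 + len_slot;
}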

A more minimal codeword is just

{
(LRL 0,1+) * (rep-match-0, any other match)
= 2 * 2 = 4
}

which is enough to get the rep-match-0 can't occur after LRL 0 modeling. Or you can do anything between those two extremes to choose an alphabet size.


More :

04/2012 to 02/2015
06/2011 to 04/2012
01/2011 to 06/2011
10/2010 to 01/2011
01/2010 to 10/2010
01/2009 to 12/2009
10/2008 to 01/2009
08/2008 to 10/2008
03/2008 to 08/2008
11/2007 to 03/2008
07/2006 to 11/2007
12/2005 to 07/2006
06/2005 to 12/2005
01/1999 to 06/2005

back to cbloom.com