
Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site, and it will likely be dropped five years after the last post in Jan 2023.


Tuesday, November 1, 2016

Handling Irregular File Formats

This is a common issue. We have a file which was printed for human consumption. Consequently, it has many different kinds of lines.

These are the two kinds of lines of interest:

900296268 4/9/16 Mobility, Data Mining and Privacy Expired

900295204 4/1/16 Pro .NET Best Practices
Expired

The first is a single physical line with four data elements. The second is two physical lines: the first physical line has three data elements, and the second line carries only the status.

There are a number of other noise lines in the file which must be filtered out.

The first "solution" pitched to me could be summarized with this:

Move "Expired" on a line by itself to the previous line

That was part of the email subject line. The body of the email was some whining about regular expressions, which I mostly ignored. Multiline regular expressions are their own kind of challenge.

We (should) all know this: https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/

Let's do this without regular expressions. There are two things we need to know: one is buffering, and the other is the best way to split each line. It turns out that there are spaces as well as tabs, and we can, by splitting on tabs, make a lot of progress.

Instead of that good approach, I'll pick the other approach, the one that doesn't involve splitting on tabs.

Here's the simulated file, with data lightly redacted.

sample_text = '''
"Your eBooks"

Show 200



Page: 1



Order # Date Title Formats Status Download
-------
xxx315605 9/30/16 R for Cloud Computing Available



xxx304790 6/21/16 Java XML and JSON Available
xxx304790 6/21/16 Accelerated DOM Scripting with Ajax, APIs, and Libraries Available

xxx291633 2/28/16 Practical Google Analytics and Google Tag Manager for Developers
Expired
'''

It's not perfectly obvious (because of line wrapping) but there are three examples of the "all-complete-in-one-line" records. There's one example of the "two-lines" record.

Rather than mess with the file, we'll build a file-like object with our sample data.

import io
file_like_object = io.StringIO(sample_text)

I like this because it lets me write proper unit test cases.
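
For example, a minimal sketch of a test case (it assumes the emit_clean() generator defined below):

import io
import unittest

class TestCleaning(unittest.TestCase):
    def test_two_line_record_is_joined(self):
        # The trailer should be folded into the preceding header line.
        source = io.StringIO("xxx291633 2/28/16 Some Title\nExpired\n")
        self.assertEqual(
            list(emit_clean(source)),
            ["xxx291633 2/28/16 Some Title\t\tExpired"])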

The file has four kinds of lines:

  • Complete Records
  • Record Headers (without Available/Expired)
  • Record Trailers (only Available/Expired)
  • Noise

We'll create some decision rules for the two obvious kinds of file lines: complete records and trailers. We can deduce the headers from a simple adjacency rule: they immediately precede a trailer. The fourth kind of line is a possible header that turns out not to be immediately prior to a trailer; that's the noise.

def complete(words):
    # A complete record has at least four words and ends with a status word.
    return len(words) > 3 and words[-1] in ('Available', 'Expired')

def trailer(words):
    # A trailer is a single status word on a line by itself.
    return len(words) == 1 and words[0] in ('Available', 'Expired')

We can spot these two kinds of lines easily. The other kinds require a Buffered Generator.

def emit_clean(source):
    header = None
    for line in (line.strip() for line in source):
        words = [w.strip() for w in line.split()]
        if len(words) == 0: continue
        if complete(words):
            yield line
            header = None
        elif trailer(words) and header:
            yield header + '\t\t' + line
            header = None
        else:
            # Possible header: buffer it until we see the next line.
            header = line

The Buffered Generator is a way to implement a "look ahead one item" (LA1) algorithm. We do this by buffering rows. When we get to the next row we can use the buffered row and the current row to implement the look-ahead logic.

The actual implementation uses a look-behind buffer, header.
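
As a generic sketch (my own illustration, not from the file-cleaning code above), a true LA1 generator pairs each item with its successor:

def with_next(source):
    # Generic LA1 helper: yield (current, following) pairs so a decision
    # about the current item can peek at the next one.
    iterator = iter(source)
    try:
        current = next(iterator)
    except StopIteration:
        return
    for following in iterator:
        yield current, following
        current = following
    yield current, None  # the last item has nothing to look ahead to

emit_clean() turns this around: rather than peeking ahead for a possible trailer, it holds the possible header until the trailer actually arrives.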

The (line.strip() for line in source) generator expression strips away leading and trailing spaces. This gets rid of the newline characters at the end of each input line.

The default behavior of split() is to split on whitespace. In this case, it will create a number of words for complete records or header records, and a single word for a trailer record. If we had split on tab characters, some of this logic would be simplified.

That's left as an exercise for the reader.
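
For reference, here's how differently the two kinds of splitting behave on a tab-delimited line whose title contains spaces:

line = "900295204\t4/1/16\tPro .NET Best Practices"
line.split()      # ['900295204', '4/1/16', 'Pro', '.NET', 'Best', 'Practices']
line.split('\t')  # ['900295204', '4/1/16', 'Pro .NET Best Practices']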

If the len(words) is zero, the line is blank.

If the line matches the complete() function, we can yield it as one of the iterable results of the generator function. We also clear out the look-behind buffer, header.

If the line is a trailer and we have a buffered look-behind line, this is the two-physical-line case. We can assemble a complete record and emit it.

Otherwise, we don't know what the line is. It's a possible header line, so we'll save it for later examination.
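
To exercise the generator, feed it the file-like object built from the sample:

for record in emit_clean(io.StringIO(sample_text)):
    print(record)

This prints the three complete records unchanged, followed by the reassembled xxx291633 record with two tabs between the title and Expired. The noise lines vanish.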

This algorithm involves no regular expressions.

With Regular Expressions

An alternative would use three regular expressions to match the three kinds of lines.

import re
all_one_pat = re.compile(r"(.*)\t(.*)\t(.*)\t\t((?:Available)|(?:Expired))")
header_pat = re.compile(r"(.*)\t(.*)\t(.*)")
trailer_pat = re.compile(r"((?:Available)|(?:Expired))")

This has the advantage that we can then use the groups() method of each successful match to emit useful data instead of text which needs subsequent parsing. This leads to a slightly more robust process.

def emit_clean2(source):
    header = None
    for line in (line.strip() for line in source):
        if len(line) == 0: continue
        all_one_match = all_one_pat.match(line)
        header_match = header_pat.match(line)
        trailer_match = trailer_pat.match(line)
        if all_one_match:
            yield all_one_match.groups()
            header = None
        elif header_match and not header:
            header = header_match.groups()
        elif trailer_match and header:
            yield header + trailer_match.groups()
            header = None
        else:
            pass  # noise: matches none of the three patterns

The essential processing involves seeing which of the regular expressions match the line at hand. If it's all-in-one, this is good. We can yield the groups of meaningful data. If it's a header, we can save the groups. If it's a trailer, we can combine header and trailer groups and yield the composite.
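
The redacted sample above uses spaces, but the real file is tab-delimited. A tiny tab-delimited example (my own construction) shows the tuples this version yields:

import io
tab_sample = io.StringIO(
    "xxx304790\t6/21/16\tJava XML and JSON\t\tAvailable\n"
    "xxx291633\t2/28/16\tPractical Google Analytics\n"
    "Expired\n"
)
for fields in emit_clean2(tab_sample):
    print(fields)
# ('xxx304790', '6/21/16', 'Java XML and JSON', 'Available')
# ('xxx291633', '2/28/16', 'Practical Google Analytics', 'Expired')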

This has the advantage of explicitly rejecting noise lines instead of treating each noise line as a possible header.


Tuesday, May 26, 2015

Regular Expression "Hell"

Actual quote: "they spend a lot of time maintaining regular expressions. So, what are the alternatives to regular expression hell?"

Regular Expression Hell? It's a thing?

I have several thoughts:
  1. Do you have metrics to support "a lot"?  I doubt it. It's very difficult to tease RE maintenance away from code maintenance. Unless you have RE specialists. Maybe there's an RE organization that parallels the DBA org. DBA's write SQL. RE specialists write RE's. If that were true, I could see that you would have metrics, and could justify "a lot." Otherwise, I suspect this is hyperbole. There's frustration, true.
  2. REs are essential to programming.  It's hard to express how fundamental they are. I would suggest that programmers who have serious trouble with RE's have serious trouble with other aspects of the craft, and might need remedial training in RE's (and other things). There's no shame in getting some training. There are a lot of books that can help. Claiming that there's no time for training (or no budget) is what created RE Hell to begin with. It's a trivial problem to solve. You can spend 16 hours fumbling around, or stop fumbling, spend 16 hours learning, and then press forward with a new skill. The choice is yours.
  3. REs are simply a variant on conventional set theory. They're not hard at all. Set theory is essential to programming; so are RE's. It's as fundamental as boolean algebra. It's as fundamental as getting a loop to terminate properly. It's as fundamental as copy-and-paste from the terminal window.
  4. REs are universal because they solve a number of problems better than any other technology. Emphasis on better than ANY alternative. RE's are baked into the syntax of languages like awk and Perl. They're universal because no one has ever built a sensible alternative. If you want to see even more baked-in regular expression goodness, learn SNOBOL4.
REs are essential. Failure to master REs suggests failure to learn the fundamentals.

RE Hell is like Boolean Algebra Hell. It's like Set Theory Hell. It's like Math Library Hell. It's like Uninitialized Variables Hell. These are things you create through a kind of intentional ignorance.

I'm sorry to sound harsh. But I'm unsympathetic.

The initial regex in question? r"[\( | \$ | \/ |]". This indicates a certain lack of familiarity with the basics. It looks like it started as r"\(|\$|/" and someone put in spaces (perhaps they intended to use the verbose option when compiling it) and/or wrapped the whole in []'s. After trying the []'s, it appeared to work and they called it done. Inside a character class, though, the spaces and pipes are literal characters, not alternation, so the expression matches far more than intended.

The email asked (sort of trivially) if it was true that the last pipe was extraneous. Um. Yes. But.
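
A quick demonstration of the difference (my own example):

import re

bad = re.compile(r"[\( | \$ | \/ |]")
good = re.compile(r"[($/]")

text = "cost: $5 (a|b)"
bad.findall(text)    # [' ', '$', ' ', '(', '|'] -- spaces and pipes match too
good.findall(text)   # ['$', '(']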

Follow-up

The hard parts are (1) trying to figure out what the question really is. Why did they remove just the last pipe character? What were they trying to do? What's the goal? Then (2) trying to figure out how much tutorial background is required to successfully answer whatever question is really being asked. A response of r"[\(\$/]" seems like it might not actually be helpful. Acting as a magic oracle that emits mysterious answers would only perpetuate the reigning state of confusion.

The follow-up requests for clarification resulted in (1) an exhaustive list of every book that seems to mention regex, (2) a user story that was far higher level than the context of regex questions. It's difficult to help when there's no focus. Every Book. Generalized "matching" of "data."

The Python connection? Can't completely parse that out, either. It appears that this is part of an ETL pipeline. I can't be sure because the initial user story made so little sense.

Attempts to discuss the supplied user story about "matching" and "data" -- predictably -- lead nowhere. It was stopped at "Some of the problems ... aren’t just typos and misspellings." Wait. What? What are they then? If they're not misspellings, what are they? Fraud? Hacking attempts? Denial of Service attacks by tying up time in some matching algorithm?

It's a misspelling. It can't be anything else. Ending the conversation by claiming otherwise is a strange and self-defeating approach to redesigning the software.

More Follow-up

At this point, we seem to be narrowing the domain of discussion to "As time goes on, we have accumulated a lot of the 'standard mistakes'. The question that need help w/ [sic] is how to manage all the code for these 'common mistakes'?" This question was provided in lieu of an actual user story. Lack of a story might mean that we're not interested in actually solving the data matching problem. Instead we're narrowly focused on sprinkling Faerie Dust all over the regexes to make them behave better.

They don't want an alternative to regexes because the problems "aren't just typos and misspellings." They want the regex without the regex hell. 

Thursday, May 21, 2009

Name Matching Alternatives

The users want to locate people by last name.  They want flexible matching.  That's not very hard.

The DBA wants to do some wild-card searches efficiently.  The DBA may not be responding to the users' actual request, making this more complex than it needs to be.

I'm not in contact with the users, so I don't know the real requirements.  I'm hearing this through the DBA filter ("all singing, all dancing, all SQL").  I may also be hearing this through the IT management filter ("only use technology I recognize from my programming days").

In my experience, wild-card searches are rarely the user's first choice.  They want more flexible matching.  While the SQL LIKE-clause is one solution that might work, it is rarely what the users really want.

The DBA knows that the SQL LIKE-clause effectively defeats indexing and forces row-by-row comparison.  And we all know that row-by-row processing is evil.

Premature Optimization

Question 1.  Is this premature optimization?   

There's no way to tell.  The database server may be beefy enough and the query rare enough that a basic LIKE-clause pattern match will work just fine.

Step 1.  Benchmark this baseline solution.

As Fast as Possible -- in SQL

One way to find names quickly is to denormalize the database.  In addition to the proper names, also store the soundex of the name.  Since this is stored, and there's no function call in the WHERE clause, and this is fully indexed, it will find "similar-sounding" names very quickly.

Soundex has limitations, so some folks use metaphone.  The principle is the same.  When inserting or updating the name, also insert (or update) the metaphone of the name.

This, BTW, does not involve any wild-card.  Except in unusual cases, it always returns a set of candidates.  And the set of candidates is a better fit than any wild-card search.   More focused, and the whole name is considered.

Step 2.  Prototype the soundex solution.  It's hard to explain, and impossible to visualize.  Actual result sets make it concrete.
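
Python's standard library has no soundex function, but the classic algorithm is short enough to prototype directly. A minimal sketch (it assumes names are non-empty and alphabetic):

def soundex(name):
    # Classic four-character Soundex: first letter plus up to three digits.
    codes = {}
    for group, digit in (('BFPV', '1'), ('CGJKQSXZ', '2'), ('DT', '3'),
                         ('L', '4'), ('MN', '5'), ('R', '6')):
        for ch in group:
            codes[ch] = digit
    name = name.upper()
    result = name[0]
    last = codes.get(name[0])
    for ch in name[1:]:
        if ch in 'HW':
            continue                  # H and W don't separate duplicate codes
        digit = codes.get(ch)
        if digit is None:
            last = None               # vowels break a run of duplicates
        elif digit != last:
            result += digit
            last = digit
        if len(result) == 4:
            break
    return result.ljust(4, '0')

soundex('Washington')   # 'W252'
soundex('Washingtin')   # 'W252' -- the misspelling lands on the same code

The denormalized column is simply soundex(last_name), fully indexed; the query becomes an equality test against the soundex of the search term.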

Throw Memory At It

Here's an alternative that works really well.  

Stop using the database.

Don't waste brain cells trying to write this kind of super-flexible search in SQL. It's better done in code.  Write a simple materialized view with name and PK and nothing else.  Create the smallest possible table that can be used just for name matching -- nothing else in this table.  It's little more than an index.

Write a simple web service that queries this physically small table, applying a flexible search algorithm.  The web service will locate near-matches in this small table.  It could return full rows for the top matches, or simply return the names and PK's for users to pick from. 

You have several candidate algorithms for this server.  A wisely-written web service can use a combination of algorithms and return a match score along with the names and PK's.
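
One candidate from the standard library is difflib, which yields a usable 0.0-to-1.0 score. A sketch of the core of such a service (the names list stands in for the small name/PK table):

import difflib

def best_matches(query, names, limit=10):
    # Score every candidate name against the query; 1.0 is a perfect match.
    scored = (
        (difflib.SequenceMatcher(None, query.lower(), name.lower()).ratio(), name)
        for name in names
    )
    return sorted(scored, reverse=True)[:limit]

best_matches('Smyth', ['Smith', 'Smythe', 'Jones', 'Schmidt'])
# [(0.909..., 'Smythe'), (0.8, 'Smith'), ...]
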
Web Service for Wildcards

An alternative web service can query the name/PK table using a nice regular expression library.  Since RE syntax can be complex, you would translate from a user-friendly syntax to a proper RE syntax.  

For instance, the LIKE-like syntax can be reformulated to proper RE syntax.  The %'s become .* and the _'s become .'s.  Or perhaps you offer your users shell-like syntax.  In this case, the *'s become .* and the ?'s become .'s.

Either way, the user's wild-card becomes a proper regular expression.  The web service queries the table, matching all input against the RE.  The service could return full rows for the top matches, or simply return the names and PK's for users to pick from. 
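
A sketch of that translation for SQL LIKE syntax (for shell-style patterns, the standard library's fnmatch.translate does the same job):

import re

def like_to_regex(pattern):
    # Translate SQL LIKE wildcards into an anchored regular expression:
    # % becomes .* (any run of characters), _ becomes . (exactly one).
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))   # everything else is literal
    return re.compile(''.join(parts) + r'\Z')

like_to_regex('Sm_th%').match('Smythe')   # matches
like_to_regex('Sm_th%').match('Samson')   # None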

This little web service can be granted a large amount of memory to cache large row sets.  Boy will it be fast.

Also, depending on the pace of change in the underlying table, it may be possible for this service to query all names into a cache once every few minutes.   Perhaps it can do this by first making a SQL request to refresh the materialized view and then a query to fetch the updated view into memory.

What the DBA wants

The DBA wants some magical pixie dust that somehow makes a query with a LIKE clause use an index and behave like other properly indexed columns. 

The actual email enumerated four of the possible ways a LIKE clause could be used.  I'm guessing the hope was that somehow the enumeration of a subset of candidate LIKE clauses would help locate the pixie dust.

Here's my advice.  If this magical LIKE clause feature already existed, it would be in the DBA guide.  Since it isn't in the DBA guide, perhaps it doesn't exist.   Enumerating four use cases (name, *name, name* and *name*) doesn't help; it's still not going to work out well.  Remember, SQL's been around in this form for decades; the LIKE clause continues to be a challenge.

First, benchmark.  Second, offer the users soundex.  Then, well, you've got work to do.