Assume a series of individual data bytes sent by an application is slowly trickling into an established TCP connection. Normally TCP will wait a short period of time for some data bytes to pile up in the local buffer, then create a single TCP segment containing all those buffered bytes and send it. This is obviously much more efficient than creating a new segment for every single data byte as it arrives in the buffer, at the cost of a little added latency. A segment will be created sooner than that if a full Maximum Segment Size (MSS) worth of bytes has accumulated in the buffer, in which case TCP will immediately send a completely full segment.
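If you're curious what MSS your stack actually negotiated for a connection, here's a minimal Python sketch (Linux; the hostname and port are just placeholders for any reachable TCP service):

```python
import socket

# Minimal sketch (Linux): query the MSS TCP negotiated for a connection.
# "example.com":80 is a placeholder for any reachable TCP service.
s = socket.create_connection(("example.com", 80))

# TCP_MAXSEG reports the maximum segment size TCP will use; a full
# segment is sent as soon as this many bytes accumulate in the buffer.
mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
print(f"Negotiated MSS: {mss} bytes")
s.close()
```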
However, when the application requests a PSH (push) for a particular TCP connection, each data byte is immediately placed in a segment and sent by TCP as it arrives, with the PSH flag set in the created segment. This tells the receiver that a push is in effect from the other end, that segments carrying very small amounts of data can be expected, and that it should not perform any buffering on its end either: the receiver's TCP immediately pushes the data bytes up to the application as soon as they arrive.
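Note that the standard sockets API doesn't give an application a direct "set PSH" call; the closest practical knob is TCP_NODELAY, which disables the coalescing behavior described above so each small write goes out in its own segment (and the stack typically sets PSH on it). A minimal Python sketch, with a placeholder host/port:

```python
import socket

# Minimal sketch: TCP_NODELAY disables Nagle-style coalescing so each
# small write leaves in its own segment, which the stack typically
# marks with PSH. "example.com":7 is a placeholder (assumed echo service).
s = socket.create_connection(("example.com", 7))
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# With coalescing disabled, each one-byte send() goes out immediately
# instead of waiting to be combined with later bytes.
for ch in b"hello":
    s.send(bytes([ch]))
s.close()
```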
I'm pretty sure intervening routers/firewalls couldn't care less about the TCP PSH flag and aren't looking at it, other than maybe for logging purposes in the case of "TCP out of state".
A classic example of an application that will request a PSH from TCP is an SSH or telnet connection, where individual keystrokes are being typed and don't appear on the screen until they are acknowledged (or I guess you could say "echoed") by the other side. Obviously the user wants to see each individual keystroke appear immediately (PSH) as it is typed, rather than typing a bunch of characters, seeing a slight delay, and then having all the typed characters appear at once.
The URG flag was originally intended to allow certain critical TCP segments to "jump the queue" ahead of other segments associated with that connection. As an example, suppose a client has sent 65 Kbytes of data in a TCP connection and the TCP send window has closed; in other words, no ACK has been received yet, so the sender has to stop and wait for an ACK from the receiver before more data can be sent. So there are a bunch of TCP segments queued up on the receiver, and it is handling them in its usual FIFO fashion.

Now suppose that while the sender is waiting, the connection is forcibly killed (reset) on the sending end. The sender will send a TCP RST and set the URG flag as well. When the receiver gets this RST URG segment, it is supposed to jump it over all the currently queued segments (thus violating FIFO) and process it first. The receiver sees the RST, kills the connection on its end, and doesn't bother to process all the queued segments for that connection; it simply throws them away rather than wasting its time processing segments for a connection it knows is already dead. Without URG set, the RST segment would have to wait at the end of the queue FIFO-style to get processed, and the receiver would waste its time processing all those queued segments (sending ACKs too), only to eventually figure out that the connection was already considered dead by the sender, which couldn't care less.
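For what it's worth, the way URG is exposed to applications in the sockets API is "out-of-band" data: sending with the MSG_OOB flag makes the stack set URG and the urgent pointer on the outgoing segment. A minimal Python sketch (placeholder host/port; how the receiver handles the urgent byte varies by stack, since implementations disagree on the urgent-pointer semantics):

```python
import socket

# Minimal sketch of how URG is exposed to applications: sending with
# MSG_OOB makes the stack set URG and the urgent pointer on the segment.
# "example.com":9 is a placeholder (assumed discard service).
s = socket.create_connection(("example.com", 9))
s.send(b"normal data")
s.send(b"!", socket.MSG_OOB)   # one "urgent" byte; segment carries URG

# A receiver could pull the urgent byte out of band with:
#   data = conn.recv(1, socket.MSG_OOB)
# or request it inline via setsockopt(SOL_SOCKET, SO_OOBINLINE, 1).
s.close()
```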
As Dameon said, URG was part of the original TCP specification but was never really implemented consistently by the various vendors. I don't think intervening routers or firewalls care about the URG flag either. I suppose some traffic shaping or QoS mechanisms might have the capability to look for the URG flag and give that segment some kind of special priority, but I don't recall ever seeing anything like that in practice.