I think the touch bar is potentially a very interesting input device (at least as hardware, depending on how well its software works), certainly more useful than stupid “F” keys, which should have been dropped from keyboards 20 years ago. The place where we need more keys is down by the thumbs and between the two hands, with smarter use of the regular modifiers and letter keys (more custom layers, etc.), not slow-to-reach keys at the top of the keyboard.
[As you probably remember, I favor a general-purpose keyboard design along the lines of: *(keyboard layout image)*]
I’d love to have more analog inputs included on every machine, in addition to (rather than as a replacement for) a keyboard with hardware buttons. If they could be relied on to exist, application software could do a bunch of pretty neat stuff with them, especially if holding down modifier keys could change the touch bar content.
(My preference would be for physical analog inputs such as trackballs, mouse wheels, sliders, jog dials, etc. But those are not realistic to put on a laptop for space reasons.)
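To make the modifier-key idea concrete, here’s a minimal AppKit sketch of a bar whose contents swap while the Option key is held. It assumes the standard NSTouchBar delegate flow; the identifiers, class name, and actions are hypothetical:

```swift
import Cocoa

// Hypothetical item identifiers, for illustration only.
extension NSTouchBarItem.Identifier {
    static let normalAction = NSTouchBarItem.Identifier("com.example.normalAction")
    static let altAction    = NSTouchBarItem.Identifier("com.example.altAction")
}

class EditorViewController: NSViewController, NSTouchBarDelegate {
    private var optionHeld = false

    // Rebuild the bar whenever a modifier key goes down or up.
    override func flagsChanged(with event: NSEvent) {
        optionHeld = event.modifierFlags.contains(.option)
        touchBar = nil  // forces makeTouchBar() to run again on next display
        super.flagsChanged(with: event)
    }

    override func makeTouchBar() -> NSTouchBar? {
        let bar = NSTouchBar()
        bar.delegate = self
        bar.defaultItemIdentifiers = optionHeld ? [.altAction] : [.normalAction]
        return bar
    }

    func touchBar(_ touchBar: NSTouchBar,
                  makeItemForIdentifier identifier: NSTouchBarItem.Identifier) -> NSTouchBarItem? {
        let item = NSCustomTouchBarItem(identifier: identifier)
        let title = (identifier == .altAction) ? "Alt Action" : "Action"
        item.view = NSButton(title: title, target: self, action: #selector(didTap))
        return item
    }

    @objc private func didTap() {
        // Handle the tap for whichever item is currently showing.
    }
}
```

(Setting `touchBar` to nil is just the lazy-rebuild pattern: AppKit calls `makeTouchBar()` again the next time it needs the bar.)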
I wonder if Apple would consider selling stand-alone touch bars that could be used alongside any arbitrary keyboard. Without something like that, the primary problem I have with the touch bar is that it’s only on some devices and seems unlikely to ever reach anything close to Mac market saturation. That means software creators can’t rely on users having it, which means they can’t make it a core feature of their apps, only a secondary/alternative interface.
This is the same problem faced by, e.g., the iPhone’s pressure-sensitive “3D Touch” feature. Because it is only included on newer devices, software authors can’t depend on it, which means it can’t be used for critical features, which means app authors don’t bother using it and phone customers don’t bother looking for apps that support it. It ends up being a bit of an annoying gimmick.
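Here’s roughly what that looks like in code: a UIKit sketch (class name and width mapping invented) where the pressure reading can only live behind a capability check, so it can only ever drive an optional nicety, never a core feature:

```swift
import UIKit

class SketchCanvas: UIView {  // hypothetical drawing view
    var strokeWidth: CGFloat = 5

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        if traitCollection.forceTouchCapability == .available {
            // Pressure-sensitive path: only some devices ever get here.
            let pressure = touch.force / touch.maximumPossibleForce  // 0...1
            strokeWidth = 1 + 9 * pressure
        } else {
            // Everyone else gets a fixed width, so pressure can never
            // be load-bearing for the app's design.
            strokeWidth = 5
        }
    }
}
```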
Touch screens in general can be *very* effective input devices for interfaces that focus on content over “chrome”. They mostly suck for typing and precise selection, but they’re good at picking up relative motions, and a mouse can’t come close to multitouch for adjusting multiple analog inputs at a time.
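For example, here’s a rough AppKit sketch (view name and slider mapping are made up) of several analog parameters being driven at once, one per finger on a trackpad:

```swift
import Cocoa

class ParameterStrip: NSView {  // hypothetical multi-slider control
    var values: [CGFloat] = [0, 0, 0]  // three virtual sliders

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        allowedTouchTypes = [.indirect]  // receive raw trackpad touches
    }

    override func touchesMoved(with event: NSEvent) {
        // Each concurrent finger updates its own slider: horizontal
        // position picks the slider, vertical position sets its value.
        for touch in event.touches(matching: .moved, in: self) {
            let p = touch.normalizedPosition  // (0,0)...(1,1) on the trackpad
            let slider = min(values.count - 1, Int(p.x * CGFloat(values.count)))
            values[slider] = p.y
        }
        needsDisplay = true
    }
}
```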
One nice thing about a touch bar is that even if it is heavily user-customizable, its functions can stay discoverable and memorable, because it’s a screen (just look at it), even as they change from application to application; compare that to functions assigned to generically labeled hardware buttons. With displays getting bigger and bigger, mouse-navigated menus are becoming more and more difficult to drive. They require moving the mouse cursor away from its current position and then back, which means moving a hand well away from the keyboard, and it takes conscious attention to find menu options because the cursor approaches from a different place on screen every time. A touch bar, by contrast, sits in a fixed, cursor-independent place in the physical world. Of course there are other alternative ideas, like mouse-cursor-centered radial menus (as seen in some pro software), but these tend to see only niche use.
* * *
Personally, I wish we could have a 6–10 inch tablet in the middle of a split keyboard, serving as a touchpad for moving the cursor on an external display, as a touchscreen with various user interface controls on it, and maybe even as a drawing tablet with a stylus.
If such a thing could be relied on, some really kickass software could be made. Unfortunately, right now we have these split tablet vs. PC user interface paradigms, and nearly nobody has been doing meaningful work on integrating the two. (Microsoft’s version, where software designed for one or the other coexists on a single device that can be switched between modes, is a huge shit sandwich. Not what I’m talking about.)