
View Diary: A Software Engineer's take on (145 comments)


  •  It isn't that simple (0+ / 0-)

    Businesses do what they do because it works.  Changing to match a vendor's vision of how they should be doing their business at minimum involves a massive disruption - you have to redesign your data structures, retrain your workforce (warehouse, factory, sales, procurement, logistics, etc.), and so on.

    You also have to meet contractual obligations.  As an example, our customers have contracts with us - in some cases contracts that predate us, made with companies that no longer exist but whose responsibilities we picked up.  Also, our products become obsolete within about 12 months; in my industry, Moore's law is for pussies.  That also affects pricing structures.  All of this translates into a pricing system that is extremely flexible.

    Oracle ERP's pricing system isn't up to the job.  For a while we had a homebrew system but eventually (with a lot of pain and very slow adoption for reasons mentioned in the first paragraph) adapted to the Vendavo system.  Which doesn't play especially well with Oracle's internals.  So we're still using our old custom pricelist and special pricing authorization logic inside the ERP.

    That's just one tiny part of the business.  Credit checks are managed by one product, taxes by another.  Front ends to B2B, B2C and CSO are all different vendors, adapted to our needs.  They communicate with each other using a mix of SOA, older APIs, direct database links and message-oriented middleware.  App servers and web servers add more vendors.  That doesn't even consider the factory systems that supply inventory, which, among other things, capture a couple dozen test results during fabrication - useful for tying field problems to individual production runs and for gathering metrics on yield, scrap and long-term product quality that inform future manufacturing decisions.

    To create an order, fulfill it, ship it and get paid involves products from at least 7 vendors, plus several apps we wrote and still maintain ourselves because NOTHING on the market approaches the feature set that evolved in house.

    Or, to paraphrase your last paragraph: assuming an off-the-shelf ERP can manage OTC in a business of any size is idiocy.  Assuming any off-the-shelf vendor software can fit into the complexity that is a modern global manufacturing system without significant effort is also idiocy.

    •  Not exactly the optimal solution. (0+ / 0-)

      I've written "proxy" style modules using Oracle's template to reach existing/external/other_vendor pricing. The key is using the Item Master system -- set quantity=0, set the switch allowing 0-quantity pricing, define a sub-inventory that matches your price list.

      The sub-inventory can be a real table or a view that uses the "proxy."  One can use another program altogether -- same as using a C program to set up a sales tax hook.
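      The proxy pattern described above can be sketched roughly like this - a minimal illustration only, not actual Oracle ERP code; every name here (ITEM_MASTER, get_price, the price tables) is hypothetical:

```python
# Hypothetical sketch of the "proxy" pricing pattern: a zero-quantity
# item master entry with the zero-qty-pricing switch set signals that
# the price comes from an external/other-vendor price list instead of
# the ERP's own tables.  All names are illustrative, not Oracle APIs.

ITEM_MASTER = {
    "WIDGET-100": {"quantity": 0, "allow_zero_qty_pricing": True,
                   "sub_inventory": "EXTERNAL_PRICES"},
    "WIDGET-200": {"quantity": 42, "allow_zero_qty_pricing": False,
                   "sub_inventory": None},
}

# Stand-in for the external price list the "proxy" reaches out to.
EXTERNAL_PRICE_LIST = {"WIDGET-100": 19.95}

# ERP-internal prices for normally stocked items.
INTERNAL_PRICES = {"WIDGET-200": 5.00}

def get_price(item_id: str) -> float:
    """Route a price lookup through the proxy when the item master says so."""
    item = ITEM_MASTER[item_id]
    if item["quantity"] == 0 and item["allow_zero_qty_pricing"]:
        # Proxy path: delegate to the external price list / view.
        return EXTERNAL_PRICE_LIST[item_id]
    # Normal path: use the ERP's own pricing.
    return INTERNAL_PRICES[item_id]
```

      The point of the pattern is that the ERP's order flow never knows the price came from somewhere else - the item master flags simply route the lookup.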

      Further, this is optimistic: "Businesses do what they do because it works."

      Businesses do what they do because nobody's got the balls to take the responsibility to make changes to improve the process.  Very few processes remain anywhere near optimal as technology changes over time.

      I wrote a planning system for The Hartford decades ago.  It survived for 15 years on the same hardware.  At the time I thought that was wonderful, but really they were just too cowardly to stand up to ITT and insist on building a new spreadsheet-driven system.

      There's also the issue of experience.  Maintenance guys are generally not up to building new systems.  The range of skills isn't there.

      •  Not so much optimistic as self-evident (1+ / 0-)

        If what a business does fails to work, the business fails.

        It might not be the best way, but it does actually work.

        Change is disruptive.  It is not just a matter of courage.  It is a matter of risk.  The more useful and stable a legacy system or business process is, the more resistance there is going to be to change it.

        As a rule, if you are going to change an existing business process (which includes most major software changes), the new process has to offer significant improvements over the old one - both to encourage adoption and to help the folks who are trying to do their job overcome the inevitable problems and friction that come with doing anything new.

        If you are changing for the sake of change, you get a lot of resentful people who will passive-aggressively resist the change even if there is some future improvement payoff.

        I've been in the role of business support, developer, system architect, process engineer (in several methodologies), QA guy and techie consultant.  I've supported my company as it transitioned its multinational manufacturing tech business from a "me too" product line to one of the two unquestioned leaders in the technology, absorbing or destroying competitors along the way - what was literally hundreds of them at the start of my career is now a handful.

        I've seen every argument for or against change that you can imagine.  I've seen management get duped twice by vendors that couldn't deliver at all (so we had to build the whole thing in-house from the wreckage), one vendor that outright lied about capability, a 20-ish-year relationship with Oracle and their entire product line, and vendor after vendor get bought out and their product abandoned by the parent company.  I've also seen some really cool stuff written both in house and by vendors and stitched in.

        We had a building full of order management people when I was hired.  Now we need only a few, to manage a business that has expanded 100-fold.  I've seen a contract system go from paper files to spreadsheets to a database to an actual half-decent contract management system (once we got past the vendor that lied, sigh).  I've seen factory systems that are absolutely astounding built and maintained by utter cowboys (who later went to more disciplined agile methodologies) because those systems must constantly change and can NOT have downtime (minutes cost millions, literally).  I've seen our product architecture change to match new marketing and promotion requirements to get us into the 21st century - a change that required touching (and regression testing) nearly every IT system in the company.

        I've seen financial systems running the same chart of accounts for 15 years because even though our business has transformed it is so disruptive to change it we had to have a major ERP upgrade occur simultaneously to even consider the risk of touching it.

        It isn't a matter of being timid or a coward.  It's a matter of not fucking up in a way that costs the company millions of dollars in lost contracts, causes an entire production run of defective product, results in legal action against us, fails to meet regulatory requirements in a half dozen competing jurisdictions, or pisses off your entire salesforce by screwing up their commission compensation formula.

        As for maintenance guys not having the skills to build new systems, that might be true of 24x7-type support individuals who are trained to a different skill set, but where I work the "maintenance guys" are called "subject matter experts" because they know both the business and the tech inside and out.  They're the people who keep exposing all the missing functionality in what most of the software companies try to sell us, and the ones who figure out how to work new products into our existing systems without breaking them.

        •  "Subject matter experts" are not (0+ / 0-)

          database engineers, or acceptably knowledgeable about how to structure programming systems.

          "Disruption" is a common excuse for running noncompetitive money-losing systems.

          The old systems do their financial management functions well enough.  Process management, not so much.  Quality and value get lost, so the whole company becomes vulnerable.

          The usual result is that the laggard is absorbed in an M&A takeover. Crappy management systems make the company a target.

          •  I've done process management (1+ / 0-)
            Recommended by:
            Oh Mary Oh

            Years of it.  I've even made it work.

            I've seen theories come and go.  Continuous Improvement, TQM, ISO 9000, Six Sigma, "Do more with Less", avoid hidden factories blah blah blah.  Trained in all of them at one time or another, actually did the "Black Belt" thing for a couple years.

            The bottom line is that after a few years of using any methodology, there is always "low hanging fruit" that is best picked with some other methodology.  It turns out different categories of processes do best under different process engineering.

            To use software design as an example, in an environment of rapidly changing requirements but deep business engagement with a fairly robust test and prod environment that can tolerate rapid rollbacks of faulty code, Agile methodologies have clear advantages.

            In an environment with apps where it isn't the end of the world if an outage occurs, where security is normally important but not critical and where the expense of properly staffing and equipping a data center (or multiple for disaster recovery) is prohibitive, Cloud architectures, even with current limitations, have clear advantages.

            In an environment where requirements are stable (change slowly) but extremely complex, where downtime costs the company money instantly or leaves it exposed to lawsuits or regulatory punishment, old-school waterfall methodology combined with a robust QA and six-sigma like process monitors and statistical controls truly shine.
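            As a minimal sketch of the kind of statistical control mentioned above (illustrative only - real six-sigma process monitoring is considerably more involved), an individuals chart flags any measurement outside three standard deviations of the historical mean:

```python
# Minimal 3-sigma control-limit check, the simplest form of the
# statistical process monitoring referenced above.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, upper) 3-sigma control limits from historical data."""
    m = mean(samples)
    s = stdev(samples)  # sample standard deviation
    return m - 3 * s, m + 3 * s

def out_of_control(samples, new_value):
    """True if new_value falls outside the historical control limits."""
    lo, hi = control_limits(samples)
    return not (lo <= new_value <= hi)

# Example: hypothetical thickness measurements from past production runs.
history = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0]
print(out_of_control(history, 12.0))  # an obviously bad run -> True
print(out_of_control(history, 10.1))  # within normal variation -> False
```

            In the stable-requirements environment described, a check like this runs continuously against production metrics, and anything outside the limits halts or escalates before it becomes a defective production run.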

            If you do not understand the advantages of all three, I question your understanding of process engineering.  

            My company uses all three broad approaches, with variations for specific situations, and we're the kind of company that crushed all of our opposition by executing better than they did most of the time, while being resilient in years when we made bad decisions or the environment went toxic (as in 2009).   We're a tech company, so it isn't like we're limping along by having the secret Coke formula, an addictive chemical in our products and 80 years of marketing a brand.

            So yeah, even though we change slowly in some areas, there is actually a reason for it, and it has not affected our agility where it matters.  Having a safe place to stand means you're more able to shift your weight without disaster.

            The site in question does not fit neatly into any of these categories.  It was developed under conditions where only Agile does well, but it really is the kind of application that normally falls under massive production control for safety and reliability (similar to the third example).  Rolling out an app like that in a hurry means the usual evolution to true requirements happens after go-live, with a process very similar to what we're seeing now.  It ALWAYS sucks.  Apps like that usually aren't perfect on release because it is pretty much impossible to understand the problem well enough while developing it if nothing like it exists already.  But rushing the process to an arbitrary deadline while being extremely vague on requirements, expected user base, etc. is pretty much guaranteed to run into problems.

            •  The site was set up to fail (0+ / 0-)

              The leftover moles from the Bush Administration got it rigged as a 55-contractor clusterfxck.


              That's utterly absurd.  And these issues have nothing to do with Agile vs. waterfall vs. the POC-with-stages approach favored by USAF Systems Command and Oracle Consulting.

              (I don't see "cloud" as an architecture except at the hardware and connectivity functions.)

              Identity Management should have gone to 1 vendor.

              The rest of the web site was a 120-day project with no significant complexity -- it front-ends existing systems and there is no need for it to write into their tables or do extra logical filtering.

              All would be well getting reports on sign-ups back at end-of-day.

              55 @&^%$%^^&*amned separate contractors............

              •  I agree it was set up to fail (0+ / 0-)

                I'm not sure that was avoidable given that Congress has oversight and half of it is hostile to the idea.

                I think we can agree that procurement was completely screwed up, the whole thing was sandbagged until after the 2012 elections, and the entire project started entirely too late.  Given those front-loaded problems it's rather surprising anything was functional at all.

