# Digital Grease — Frequently Asked Questions

At least, what we think you'll ask...

## Positioning & Identity

What we are (and aren’t) — and what we’re here to fix.

Q: So… are you IT, software, or consultants?
A: We're creative problem solvers for field-driven operations. Not IT support. Not a software vendor. We find friction, fix it through process improvement and smart tools (including AI), then explain it in plain English so you can actually use it.

Q: Who gets the most value working with you?
A: Field-driven teams — construction, maintenance, logistics, utilities, public works — people who work in the physical world. We implement tech so it fits how work actually happens.

Q: Who actually does the work?
A: You work directly with me. I lead the work end‑to‑end. When we need extra hands or specialized expertise, we bring in the right people — but direction and accountability stay with me.

Q: How are you different from typical consultants?
A: I don't focus on what other consultants are doing — I focus on what my clients need. Find the friction, fix the process, add tools only if they help. The goal is solving problems and getting results, not checking boxes, writing reports, and racking up billable hours. One practical fix at a time.

Q: Do you focus on AI, or operations?
A: Both — and they're becoming inseparable. It's like asking your mechanic whether they focus on the engine or the transmission. Either one can work fine on its own — but if the car's going to get you anywhere, they need to work together. Same here. The way AI is evolving, it's not just a tool you bolt on anymore. It's becoming part of how operations run, how decisions get made, and how systems talk to each other. You can't improve one without thinking about the other. We figure out which needs attention first and build from there.

Q: How are you different from IT consultants or managed service providers?
A: We don't manage your infrastructure, handle tickets, or sell software licenses.
We solve operational problems that happen to involve technology — process friction, disconnected systems, wasted time. We fix the problem, not just the tech.

Q: We tried some AI tools and they didn't really do much for us. What would be different?
A: The tools have gotten a lot better — but the process still matters more. A top-of-the-line cordless drill with 17 speed settings doesn't help if the job calls for a sledgehammer or a saw. Understanding your process, what outcome you're actually after, and where things can go wrong before you pick a tool — that's the part most people skip. That's where we start. See Operational Assessment and AI-101.

Q: Can't we just do this ourselves?
A: Maybe. If the ops side is broken, you either haven't seen it or haven't had the bandwidth to fix it — and that won't change on its own. The tech and AI are always improving, but they're far from plug-and-play. Knowing where they work, where they don't, how to wire them into systems that weren't built to talk to each other, and how to make sense of data that's scattered across platforms — if you've got the time and the skills, have at it. But most don't have the time, and it can be a hard way to learn that you didn't know what you didn't know.

Q: What about AI agents?
A: They're real — and they're coming faster than most people think. An AI agent is software that acts on its own: pulling data, drafting documents, routing work, monitoring systems. When they're built right, they're a force multiplier. When they're not, they're expensive and unpredictable — and at worst, they can do real damage. The challenge isn't the AI — it's whether your processes are documented, your data is accessible, and your systems can actually talk to each other. We help you get that foundation right so agents have something solid to work on. See Test Bench & Toolbox and Fractional AI Officer.

## Where to Start

Low‑risk ways to engage without overcommitting.

Q: What if we don't know where to start?
A: Start with an Operational Assessment. It's a fast diagnostic of how things actually run, where the friction is, and what's worth fixing first. If your team is still wrapping their heads around AI, AI-101 is an excellent way to get their gears turning.

Q: Can we start small?
A: Yes. We pick one process, try one fix, and judge by results — not hype. Quick wins first; then scale what proves itself. See Test Bench & Toolbox.

Q: Do we need clean data or documentation before we start?
A: No. If docs are light, we capture how things actually work first. That becomes the base for reliable automation. See Process Documentation and Digital Tune‑Up & Connect.

Q: What does a first conversation look like?
A: It's a conversation. You tell me what's going on — what's on your mind, what you're interested in, what you're not sure about. I ask questions and give you honest reactions. If it makes sense to keep talking, we figure out a next step. If not, no hard feelings.

## Delivery & Workflow

How we work together and what it feels like.

Q: Is this meetings and slides, or hands‑on work?
A: Both — but only what’s useful. Short working sessions plus hands‑on mapping, cleanup, and small builds. The format flexes; the goal doesn't: things actually get done.

Q: Do you work remotely or on-site?
A: It depends on what needs to get done. A lot can be handled remotely, but sometimes being there works better. We'll figure out the best approach together.

Q: Do you work with leadership or frontline staff?
A: Both, together. Leadership sets direction; staff shows reality. We validate both before changing anything.

Q: Do you stick around long enough for habits to form?
A: Yes. If a team needs help turning a new idea or tool into habit, ongoing coaching is part of the work — not a separate upsell. See People & Change.

Q: How are engagements structured — project, retainer, or something else?
A: Whatever makes sense.
Some engagements have a clear scope — an Operational Assessment, for example — and run with deliverables and an end date. Others are a few focused sessions, like AI-101. And some are ongoing relationships by definition, like a Fractional AI Officer. Each service page describes a typical engagement, so you can get a feel for what fits. We don't push a structure. It follows the need.

## Culture, Change & Adoption

Introducing AI and change without drama.

Q: How do you avoid spooking people about AI?
A: We’re direct: the goal isn’t to replace people — it’s to remove the draining parts of the job. When tools delete hassle, people lean in. AI-101 helps non‑technical teams see it for themselves.

Q: How do you make sure new ways of working stick?
A: Align incentives, document the new flow, and support managers. People & Change keeps good habits from sliding back. Process Documentation makes the new way clear.

Q: What is The Grease Shop?
A: A low‑pressure space to get experimental, with enough structure so good ideas don’t die on the whiteboard. Great for exploring automation, AI use cases, or new process ideas before committing. See The Grease Shop.

## Cost, Time & Risk

Expectations without drama.

Q: How long do things usually take?
A:
- Operational Assessment: 2–4 weeks.
- Process Documentation: usually 1–3 workflows at a time.
- AI‑101: 2–4 short sessions.
- Test Bench & Toolbox: often days or weeks, not months.

Goal: visible progress quickly.

Q: How much time will you need from our staff?
A: Short, focused blocks — usually a few hours per week during discovery and testing, plus follow‑ups between sessions. We protect day jobs.

Q: Are we talking $5K or $500K?
A: We don’t publish pricing. We sequence work to fit reality: start small, prove value, then decide if a larger investment makes sense. If it demands enterprise scale, we’ll structure it — after the early work proves it's worth it.

Q: What if a prototype doesn’t pan out?
A: Then it did its job: fail cheaply, learn quickly, move on. The goal is answers, not sunk cost. See Test Bench & Toolbox.

Q: What happens to our data if AI tools get involved?
A: If we're even talking about AI and data, it's because we've already determined AI can improve things — and that isn't always the case. Sometimes a small adjustment to your current process is all you need. If AI is part of the answer, the data has to be solid before it touches a model. Bad data doesn't just make AI ineffective — it can produce outcomes that actively damage your business. Getting the foundation right comes first. And the system has to protect your data from the AI, not just the other way around. Leaks, errors, anything leaving your control without agreement — all of it gets addressed up front. Compliance with industry regulations, legal requirements, and your internal standards is built in from day one.

Q: Aren't sensors and robotics expensive to get started with?
A: Sensors make sense almost anywhere you're currently going without data — or collecting it by hand once a shift. A $50 sensor with a $10/month data plan can report every few minutes on equipment runtime, temperature, vibration, pressure — whatever matters. Equipment telemetry alone shifts you from reactive maintenance to predictive. Production tracking goes from end-of-shift counts to real-time visibility. You can't improve what you can't measure, and most operations have huge gaps in what they're actually measuring. Robotics is a bigger conversation, but it's one you need to start having. The hardware is getting cheaper, the use cases are getting more practical, and your competitors are already looking at it. The key is making sure your processes are solid first — automating a bad process just speeds up waste. We help you figure out where you actually are and what makes sense next. See Robots & Sensors.

## Credibility & Fit

Who we’re best for — and how we handle trust.

Q: Have you done this before?
A: Depends what you mean by "this." I've spent 20+ years in field operations — managing crews, running maintenance programs, and fixing processes that weren't working. Then 6+ years building software and IoT hardware for construction and fleet management. AI consulting is the newest layer, but it sits on top of all of it. See About for the full story. What ties it together: I understand that processes must match real-world conditions, not whiteboard versions of them. That people and systems have to work together or both fail. In field operations, being wrong isn't abstract — it's physical. And I've sat on both sides of technology — the people using it and the people building it — so I know how to bridge them.

Q: Do you sign NDAs and handle confidential processes?
A: Yes. We align up front on data handling, access, and scope. Discretion is part of the job.

Q: Who might not be a great fit?
A: Teams that want a giant pitch deck, a big‑bang rollout, or a vendor to “own everything.” We start small, prove value, and scale what works. If you want theater, we’re not your shop.

Q: Is a Fractional AI Officer overkill — or not enough?
A: Ask yourself three things:
- Is AI already showing up in a lot of your decisions and future planning?
- Are you big enough that it matters, but not big enough to justify a full-time AI executive?
- Do your internally capable staff members already have full plates with their "real jobs"?

If all three are yes, a Fractional AI Officer might be the perfect fit. And it's not a one-time engagement. AI doesn't work like a remodel where the contractor finishes and you don't need them for 15 years. It's ongoing, it's evolving, and someone needs to be thinking about it consistently.

Q: What industries have you worked with?
A: Public sector field operations — highway maintenance, fleet management, and crews in the field. Tech and software — SaaS platforms and IoT hardware built for construction and large contractors.
The common thread: physical, operations-heavy work where people, process, and tech have to line up or nothing works.

Q: Where are you based, and do you work outside your area?
A: Indiana. A lot of the work can happen remotely — discovery, planning, advisory, most of it. If something needs to happen on-site, we make it happen. We'll figure out what makes sense as we go.

## Services — Quick Answers

Short, practical answers tied to each service.

Q: What do you actually check in an Operational Assessment?
A: Data hygiene, documentation, workflow clarity, tool stack, access/permissions, and low‑risk spots where automation or AI can help without creating rework. You leave with prioritized fixes and next steps. See Operational Assessment.

Q: Is AI-101 a lecture or real examples?
A: Short, interactive sessions with real tasks in your existing tools. People learn where AI helps — and where it doesn’t. See AI-101.

Q: What do we get from Process Documentation?
A: Clear maps of how things actually work (including workarounds), in formats people can use and machines can reference later. Supports training, QA, and future automation. See Process Documentation.

Q: When should we use Digital Tune‑Up & Connect?
A: When tools are scattered or disconnected. We clean up configuration, permissions, and integrations so your stack stops fighting itself. See Digital Tune‑Up & Connect.

Q: What’s the difference between a Tool and a Prototype in Test Bench & Toolbox?
A: Tools are quick, lightweight utilities that help now (often AI‑assisted). Prototypes test a process you might scale later — built to prove value in weeks, not months. See Test Bench & Toolbox.

Q: Why include People & Change?
A: New systems fail if people don’t adopt them. People & Change aligns incentives, clarifies impact, and supports managers so the new way sticks.

Q: Is an Operational Roadmap just a deck or an actual plan?
A: An actionable sequence with owners, prerequisites, and quick wins — not just slides. Clarifies what to do now vs. later, and helps avoid overbuilding. See Operational Roadmap.

Q: What does a Fractional AI Officer do?
A: Ongoing guidance, prioritization, and vendor sanity‑checks without hiring a full‑time exec. We keep momentum between projects and make sure work keeps paying off. See Fractional AI Officer.

Q: Who needs a Fractional AI Officer?
A: Companies where AI matters but not enough for a full-time exec — and where capable people already have full plates with their real jobs. See Fractional AI Officer.

Q: Where do robots and sensors actually make sense for a company like ours?
A: Sensors make sense anywhere you're going without data because it's too hard for a person to collect — they're cheap and practical now. Robots might not make sense yet, but they will soon, and you'll want to start preparing. See Robots & Sensors.

Q: What’s The Grease Shop for?
A: A safe place to bring ideas and try them without rollout pressure — curiosity with guardrails. It’s creative, but structured enough that good ideas don’t die on the whiteboard. See The Grease Shop.

Q: Do you integrate with our existing systems?
A: Yes — we design around what you already have. APIs, webhooks, exports, lightweight connectors. If integration gets complex, we bring in specialists and manage the work. See Digital Tune-Up & Connect and Test Bench & Toolbox.

Q: Which AI tools should we actually be using?
A: Depends on your problems, your systems, and your team. There's no universal answer, and anyone who gives you one is selling something. We look at what you're already paying for, what your actual pain points are, and where a tool — AI or otherwise — would make a measurable difference. Sometimes it's a new platform. Sometimes it's turning on a feature in software you already own.
And if you know where you're headed longer-term, it may make sense to get the right tools in place now rather than rip and replace later. An Operational Assessment sorts out where you are. An Operational Roadmap helps plan where you're going.

## Do Now!

Things you can do right now.

Q: Do we need an AI usage policy even if nobody's officially using AI?
A: People are using it. That's not a bad thing — it's probably a good thing. But without guardrails, they could put company data, client details, or pricing into tools you don't control and create exposure nobody intended. A simple one-page policy — what's allowed, what's off-limits, when to ask — takes an afternoon and keeps everyone out of trouble. If you want to help people use AI tools effectively and responsibly, that's where AI-101 comes in.

Q: Should we track what AI tools people are using on their own?
A: Lightly, yes — and not just for the obvious reason. If people are using tools outside your systems, you need to understand what data is going where. But more importantly, pay attention to what they're reaching for and why. That's your team telling you where the pain points are and where opportunities exist. A lot of what they're looking for may already be available in software you're paying for, or there may be a better way to solve it entirely. AI-101 helps people find and use what's already there. If real opportunities surface, Test Bench & Toolbox can validate them safely.

Q: Should someone internally be keeping an eye on AI, even before we hire outside help?
A: Doesn't hurt. If you're a smaller operation, one person with a line to leadership is enough — someone tech-comfortable who can notice where AI is showing up and flag concerns or opportunities. For larger organizations, it may make sense to have a few people across different functions keeping tabs, or even a single person for whom this is a significant part of their role. Either way, it's awareness and clear ownership, not a committee.
If you later want broader team exposure, that's where AI-101 comes in.

Q: Should we be thinking about how AI changes roles and hiring over the next few years?
A: Yes. AI is changing what roles look like, what skills matter, and what can be automated — and that includes physical work, not just office tasks. Robotics and sensors are getting cheaper and more practical every year. Meanwhile, experienced people are retiring and taking decades of knowledge with them. You don't need to rewrite your org chart, but you should be thinking about which tasks shift, which skills you'll need more of, and where technology fills gaps that hiring alone can't. If you want to formalize this, that's an Operational Roadmap or Robots & Sensors conversation.

Q: What should leadership focus on right now to avoid falling behind?
A: Three things:

- Pay attention to real applications, not hype. Ignore the noise. Look at what's actually working in your industry.
- Shore up the foundation. Fix the obvious stuff — outdated spreadsheets, software nobody uses correctly, permissions that don't make sense. And start documenting how your experienced people actually do things before that knowledge walks out the door.
- Encourage small experiments with guardrails. Let people try things. Just make sure there's a policy in place first.

An Operational Assessment covers a lot of this ground. Process Documentation handles the knowledge capture. And yes — calling Digital Grease doesn't hurt either.

Q: Do we need rules about what data is safe to put into AI tools?
A: Yes. If someone pastes client information, internal pricing, or sensitive operational details into an AI tool, you've lost control of where that data ends up and who can see it. Depending on your contracts and industry, that could be a compliance issue — or a legal one. Define what's safe, what's restricted, and who approves exceptions. Simple, clear, enforceable. This eventually folds into broader Process Documentation or assessment work.
Q: Should someone review AI-generated content before it goes to a client or into the field?
A: Always. AI writes confidently and gets things wrong — sometimes because the model made it up, sometimes because it was fed bad context and didn't know any better. Either way, a human who knows the subject catches what AI can't. Eventually you can build systems with automated checks and workflow gates — that's Test Bench & Toolbox or Operational Roadmap territory. But right now, the rule is simple: a person reviews it before it leaves your hands.

Q: Our best people are retiring. How do we keep what they know from walking out the door?
A: You don't — not completely. But you can capture a lot more than most companies think. The trick is doing it before the person leaves, not after. Sit down with your experienced people and document how they actually do things — not the official procedure, the real one. The shortcuts, the judgment calls, the "I just know" decisions. That becomes training material, onboarding context, and a foundation for tools that help less experienced people make better calls. This is exactly what Process Documentation is for. If you're thinking about it, the time to start is now — not at the retirement party.

Q: Should we be budgeting for AI costs?
A: Yes. New AI tools keep rolling out, and the tools you already use keep adding AI features. You're going to use them. Plan for an increase in spend — and expect that increase to be more than offset by savings elsewhere, if the tools are being used effectively. Either way, it shouldn't be a surprise on someone's expense report.

Q: We're paying for AI features in our software that nobody's using. Should we care?
A: Yes — because you're already spending the money. Microsoft 365, Google Workspace, your CRM, your project management tools — most of them have added AI features your team has never touched. Before you buy anything new, figure out what's already there.
Sometimes turning on what you're paying for solves the problem you were about to spend money on. AI-101 helps your team see what they've got. Digital Tune-Up & Connect makes sure it's configured right.