People mistakes, inference for beginners, and a guide to PR for startups
Optimistic in the general and skeptical in the specific.
Good morning,
In today's edition, among other things:
Common early-stage people mistakes
Inference in AI for beginners
The EU AI Act: What you need to know about the new law
Digital Media Trends 2024
The ultimate guide to PR
Protocols for Excellent Parenting
AI agentic workflows
Strategy In The Era Of AI
The eight secrets to a (fairly) fulfilled life
Onwards!
Common early-stage people mistakes
Early in the 0→1 journey, two things really matter: finding the right problem to solve and finding the right people to build the solution. The more mistakes I've made in the latter, the more convinced I am that you should never compromise on talent. That does not mean that more expensive, bigger-logo, more experienced hires are always better. It means you are looking for high-agency, high-urgency people. They are very difficult to find, and mistakes will happen. Here’s a list of the most common ones, from the Team Plan by Index Ventures:
Insufficient focus on talent density
Forming a founding team that lacks technical DNA
Forgetting that no hire is better than a bad hire
Being reluctant to get rid of A-holes or B-players
Insufficiently focusing on diversity from the earliest stages
Over-indexing on loyalty to the early team when you need to bring in more specialized or experienced talent
Mistakes around people and hiring processes
Outsourcing early hiring rather than embracing founder-led recruiting
Assuming others can make hiring decisions and stepping back too soon from personally vetting all candidates
Not spotting when you need to hire an in-house recruiter
Hiring an inexperienced in-house recruiter
Inflating job titles, leading to resentment and attrition down the line
Being seduced by sexy brands on a resume rather than focusing on competencies and fit
Failing to establish and stick to compensation principles, seeing it as a win to hire cheaply or, conversely, offering a sweetheart deal
Insufficiently focusing on onboarding
Failing to future-proof and not investing upfront where it matters
Being too slow to explicitly articulate the culture you want to build and the values that will underpin it
Not communicating a clear vision, mission and strategy, allowing fiefdoms to develop, which undermine collaboration
Building a tech stack for today’s scale and scope, which absorbs headcount and slows you down when you face tomorrow’s scale and scope
Not recognizing when professional financial and legal advice really matter and are worth the expense
Hiring into the wrong roles
Hiring a senior product leader too early, when the founder needs to personally own the product vision
Hiring a senior salesperson too early rather than embracing founder-led sales
Running key marketing and/or sales experiments through a generalist and therefore prematurely shutting down promising marketing and/or sales channels
Failing to hire into the right roles
Not having a superstar owning early Community and Customer Support/Experience (CX) functions, leading to an inadequate loop from early user feedback into product and growth
Getting bogged down in operations by not hiring a Chief of Staff or Head of Business Operations (BizOps)
Not recognizing when, and in which roles, you need to shift from generalists to specialists
Reluctance to hire, or to properly partner with, an executive assistant (EA) as a way of creating leverage
Misallocating your time
Spending too much time on low priority stuff for your stage (e.g. attending tech conferences, media appearances, meeting potential investors)
Not investing in building and leveraging a full-stack network of advisors and mentors
Not monitoring or creating space for the physical, mental and emotional well-being of your team and yourself
Inference in AI for beginners
In artificial intelligence (AI) broadly and machine learning (ML) specifically, one crucial technique stands out as a driving force behind the remarkable advancements we witness across industries: inference. As AI systems become increasingly sophisticated, their ability to draw meaningful conclusions from vast amounts of data has become a game-changer. This is where inference plays a significant role.
What is Inference in AI?
Inference refers to the process of drawing conclusions based on evidence and reasoning. In the context of AI, inference allows systems to derive new knowledge from existing data, enabling them to make predictions, classifications, or decisions based on learned patterns. This sets inference apart from other AI/ML approaches like training, which involves learning patterns from labeled data. While training lays the foundation, inference is the critical step that applies those learned patterns to new, unseen data, unlocking valuable insights and enabling AI systems to “reason” and make informed decisions.
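To make the training/inference split concrete, here is a minimal sketch using a toy threshold classifier. The `train` and `infer` functions and the data are illustrative assumptions, not a real library API:

```python
# Minimal sketch of the training vs. inference split.
# Toy model: learn a decision threshold from labeled 1-D data.

def train(examples):
    """Training: learn a pattern (here, a threshold) from labeled data."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    # Learned "pattern": the midpoint between the two class means.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def infer(threshold, x):
    """Inference: apply the learned pattern to new, unseen data."""
    return 1 if x >= threshold else 0

# Labeled training data: (feature, label) pairs.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (8.0, 1), (8.5, 1), (9.0, 1)]
model = train(data)       # training happens once, up front
print(infer(model, 7.2))  # inference on a new point → 1
print(infer(model, 2.3))  # → 0
```

Training produced the threshold once; inference then reuses it cheaply on every new data point, which is exactly the division of labor in production ML systems.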
Types of Inference in AI:
It's essential to understand the different types of inference employed in AI systems:
Deductive Inference:
Deductive inference involves drawing logically certain conclusions from premises. It follows a top-down approach, starting with general rules and applying them to specific instances.
For example, if we know that all men are mortal and that Socrates is a man, we can deduce with certainty that Socrates is mortal. Deductive inference is often used in rule-based systems and expert systems, where hand-coded rules and heuristics guide the reasoning process.
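The Socrates example can be sketched as a tiny rule-based system. The fact/rule encoding below is a simplified illustration of how expert systems chain rules, not a real inference-engine API:

```python
# Minimal sketch of deductive inference: apply general rules to
# specific facts until no new conclusions can be derived.

facts = {("man", "Socrates")}
rules = [
    # Rule: for all X, man(X) -> mortal(X)
    ("man", "mortal"),
]

def deduce(facts, rules):
    """Forward-chain: repeatedly apply rules to derive new facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pred_in, pred_out in rules:
            for pred, subject in list(derived):
                if pred == pred_in and (pred_out, subject) not in derived:
                    derived.add((pred_out, subject))
                    changed = True
    return derived

conclusions = deduce(facts, rules)
print(("mortal", "Socrates") in conclusions)  # → True
```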
Inductive Inference:
Inductive inference, on the other hand, involves drawing probable conclusions based on observed patterns. It follows a bottom-up approach, starting with specific observations and generalizing them to broader principles.
For instance, if we observe that the sun has risen every day in the past, we can inductively infer that the sun will likely rise tomorrow. Inductive inference is commonly used in machine learning, where models learn patterns from training data and make predictions on new, unseen data.
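The sunrise example can be sketched as frequency-based induction. The Laplace-smoothed estimator below is one simple way to express "likely, but never logically certain":

```python
# Minimal sketch of inductive inference: generalize from observed
# instances to a probable (not certain) conclusion.

observations = ["sunrise"] * 1000  # the sun rose on every observed day

def inductive_probability(observations, event):
    """Estimate an event's probability from its observed frequency.
    Laplace smoothing keeps the estimate strictly below 1, so the
    conclusion stays probable rather than certain."""
    hits = sum(1 for o in observations if o == event)
    return (hits + 1) / (len(observations) + 2)

p = inductive_probability(observations, "sunrise")
print(p)  # ≈ 0.999: very likely, but never exactly 1
```

Note that no number of confirming observations pushes the estimate to 1, which mirrors the philosophical point: induction yields probable conclusions, deduction yields certain ones.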
Abductive Inference:
Abductive inference focuses on inferring the most likely explanation for a set of observations. It involves reasoning from effects to causes, seeking the simplest and most probable explanation.
For example, if we observe that the grass is wet, we might abductively infer that it rained recently, as that is the most likely explanation based on our prior knowledge. Abductive inference is often employed in diagnostic systems and fault detection, where the goal is to identify the root cause of an observed problem.
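The wet-grass example can be sketched as scoring candidate explanations by prior times likelihood (unnormalized Bayes). The priors and likelihoods below are illustrative numbers, not real data:

```python
# Minimal sketch of abductive inference: pick the most probable
# explanation for an observation. All numbers are illustrative.

# P(cause) and P(wet grass | cause)
priors = {"rain": 0.3, "sprinkler": 0.1, "dew": 0.05}
likelihood_wet_grass = {"rain": 0.9, "sprinkler": 0.8, "dew": 0.4}

def abduce(priors, likelihoods):
    """Return the explanation with the highest posterior score
    (prior * likelihood, i.e. unnormalized Bayes' rule)."""
    return max(priors, key=lambda cause: priors[cause] * likelihoods[cause])

print(abduce(priors, likelihood_wet_grass))  # → 'rain'
```

Diagnostic systems do essentially this at scale: enumerate candidate causes, score each against the observed symptoms, and surface the best explanation.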
To understand how inference works in practice, let's break down the typical steps involved in the inference process:
Observe data/facts: The AI system takes in raw data or observations from various sources, such as sensors, databases, or user inputs.
Reason over the data: Using logic, learned patterns, and domain knowledge, the AI system analyzes and processes the data to extract meaningful insights and relationships.
Draw conclusions or make predictions: Based on the reasoning process, the AI system generates output in the form of conclusions, predictions, or decisions.
Act on the conclusions: The AI system integrates the inferred knowledge into its decision-making process, using it to guide actions or trigger further analysis.
To illustrate this process, consider an AI system designed to predict customer churn in a telecom company. The system observes data about a customer's usage patterns, service interactions, and demographic information. It then reasons over this data using a trained machine learning model, which has learned patterns from historical customer data. Based on these patterns, the model infers a high probability that the customer will churn in the near future. Acting on this inference, the system flags the customer for targeted retention efforts, such as personalized offers or proactive outreach.
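The churn example above can be sketched in code, mapping directly to the four steps. The features, weights, and threshold are all illustrative assumptions, not a real telecom system:

```python
# Hedged sketch of the churn example: observe -> reason -> conclude -> act.
import math

# Step 1: observe data about one customer.
customer = {"monthly_usage_drop": 0.6, "support_tickets": 4, "tenure_years": 0.5}

# A toy "trained model": weights learned from historical customer data.
weights = {"monthly_usage_drop": 1.2, "support_tickets": 0.3, "tenure_years": -0.8}
bias = -1.0

def churn_probability(features):
    """Step 2: reason over the data with the learned pattern (logistic model)."""
    score = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-score))

# Step 3: draw a conclusion, i.e. predict the churn risk.
p_churn = churn_probability(customer)

# Step 4: act on the conclusion by flagging high-risk customers.
if p_churn > 0.5:
    print(f"Flag for retention outreach (p={p_churn:.2f})")
```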
You might be wondering: OK, but where can I actually use inference? A couple of examples: