Commit 837e4a7

tweaks
1 parent 1080520 commit 837e4a7

1 file changed (+5, -5 lines changed)

index.html

Lines changed: 5 additions & 5 deletions
@@ -204,15 +204,15 @@ <h2>Agenda (Mar 22, 2026, ASPLOS'26)</h2>
   <div class="time">9:00 – 9:30</div>
   <div class="content">
     <h3>Session 1: Introduction to Jaseci</h3>
-    <p>Why is building production-grade AI applications still so hard? This session examines the fundamental gaps in today's AI development stack and shows how Jaseci addresses them. We introduce the Jac language, the Jaseci runtime, and the Open Service Platform, explaining how they work together to remove boilerplate and let developers focus on logic, not infrastructure.</p>
+    <p>An overview of the Jac language, Jaseci runtime, and Open Source Ecosystem, and how they address the fundamental gaps in modern AI application development.</p>
   </div>
 </div>

 <div class="timeline-item">
   <div class="time">9:30 – 10:00</div>
   <div class="content">
     <h3>Session 2: Agentic AI - Beyond Prompt Engineering</h3>
-    <p>Prompt engineering is fragile, repetitive, and hard to maintain. This session explores how Jaseci's <code class="code-hl">by llm()</code> construct replaces hand-crafted prompts with semantic type annotations, where the compiler infers and generates the right prompt automatically. Attendees will see how this shifts AI development from trial-and-error prompting to structured, type-safe reasoning.</p>
+    <p>How Jaseci's <code class="code-hl">by llm()</code> construct uses semantic type annotations to auto-generate prompts, replacing fragile hand-crafted prompting with structured, type-safe reasoning.</p>
   </div>
 </div>

@@ -227,16 +227,16 @@ <h3>Coffee Break</h3>
   <div class="time">10:30 – 11:45</div>
   <div class="content">
     <h3>Session 3: Agentic AI - Building Real Workflows</h3>
-    <p>Designing multi-agent systems is notoriously complex. Coordinating agents, managing state, and wiring everything together creates enormous overhead. In this hands-on session, attendees build agentic workflows using Jaseci's walker-on-graph model, where agents traverse data graphs natively. The goal is to experience firsthand how much complexity Jaseci's abstractions eliminate compared to conventional approaches.</p>
+    <p>Hands-on experience building multi-agent workflows using Jaseci's walker-on-graph model, where agents traverse data graphs natively without the usual coordination and state management overhead.</p>
   </div>
 </div>

 <div class="timeline-item">
   <div class="time">11:45 – 12:30</div>
   <div class="content">
     <h3>Session 4: Research Frontiers</h3>
-    <p><strong>Kernel Forge:</strong> Writing GPU kernels by hand is a major barrier to optimizing deep learning models. Kernel Forge closes this gap by automatically synthesizing and tuning GPU kernels for PyTorch models, requiring no prior knowledge of CUDA or kernel programming.</p>
-    <p><strong>GraphMend:</strong> <code class="code-hl">torch.compile()</code> often silently falls back to eager mode when it encounters unsupported patterns, undermining performance. GraphMend applies source-level code transformations to detect and fix these graph breaks, ensuring models compile fully and run at peak efficiency.</p>
+    <p><strong>Kernel Forge:</strong> Automatically synthesizes and optimizes GPU kernels for PyTorch models, removing the need for any CUDA or kernel programming expertise.</p>
+    <p><strong>GraphMend:</strong> Applies source-level code transformations to eliminate FX graph breaks in <code class="code-hl">torch.compile()</code>, preventing silent fallbacks to eager mode and ensuring models run at full compiled performance.</p>
   </div>
 </div>

0 commit comments

Comments
 (0)