Assembly Language on Ubuntu
This book is a reference for a university-level assembly language programming course. The instruction set covered is x86-64, targeting 64-bit Ubuntu.
The code in this book was tested on Ubuntu 22.04 LTS.
Source:
A study of AI use by educators
Today we’re releasing new @AnthropicAI research on how educators use AI, analyzing ~74,000 conversations from professors using @claudeai in collaboration with Northeastern University.
4 initial findings…
- Educators are builders, not just users of AI. Faculty are creating interactive chemistry simulations, grading rubrics, and data dashboards with Claude Artifacts.
- Educators automate the drudgery while staying in the loop for almost everything else. 77% of teaching and classroom instruction uses are collaborative, while 65% of financial/fundraising tasks are fully delegated. High-touch educational work remains human-centered.
- Notable tension in the data: 49% of grading conversations showed automation patterns, yet faculty rated this as AI’s least effective application. This disconnect highlights ongoing debates around appropriate AI use in assessment.
- AI is forcing pedagogical change. Professors are completely redesigning assessments—one shared: “I will never again assign a traditional research paper,” instead creating assignments requiring critical thinking even with AI assistance.
NVIDIA Jet-Nemotron
Abstract
We present Jet-Nemotron, a new family of hybrid-architecture language models, which matches or exceeds the accuracy of leading full-attention models while significantly improving generation throughput. Jet-Nemotron is developed using Post Neural Architecture Search (PostNAS), a novel neural architecture exploration pipeline that enables efficient model design. Unlike prior approaches, PostNAS begins with a pre-trained full-attention model and freezes its MLP weights, allowing efficient exploration of attention block designs. The pipeline includes four key components: (1) learning optimal full-attention layer placement and elimination, (2) linear attention block selection, (3) designing new attention blocks, and (4) performing hardware-aware hyperparameter search. Our Jet-Nemotron-2B model achieves comparable or superior accuracy to Qwen3, Qwen2.5, Gemma3, and Llama3.2 across a comprehensive suite of benchmarks while delivering up to 53.6× generation throughput speedup and 6.1× prefilling speedup. It also achieves higher accuracy on MMLU and MMLU-Pro than recent advanced MoE full-attention models, such as DeepSeek-V3-Small and Moonlight, despite their larger scale with 15B total and 2.2B activated parameters.
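The core PostNAS idea — keep the pre-trained MLPs frozen and search only over the per-layer attention design — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the candidate names, the proxy scoring function, and the brute-force search are placeholders, not the paper's actual method (the real pipeline trains a once-for-all supernet and uses learned placement, not exhaustive enumeration).

```python
import itertools

# Hypothetical 4-layer model: each layer has an attention block and an MLP.
layers = [{"attn": "full", "mlp_frozen": False} for _ in range(4)]

# Step 0 (the key PostNAS move): freeze every MLP so the search
# only explores attention-block designs.
for layer in layers:
    layer["mlp_frozen"] = True

# Illustrative per-layer attention candidates (names are made up).
CANDIDATES = ["full", "linear", "sliding_window"]

def proxy_score(config):
    # Toy proxy: full attention is "more accurate" but "costlier";
    # the score trades accuracy against hardware cost, standing in
    # for the paper's hardware-aware search objective.
    accuracy = sum(1.0 if a == "full" else 0.8 for a in config)
    cost = sum(3.0 if a == "full" else 1.0 for a in config)
    return accuracy - 0.2 * cost

# Brute-force search over per-layer attention choices (feasible only
# for this toy; stands in for components (1), (2), and (4) above).
best = max(itertools.product(CANDIDATES, repeat=len(layers)),
           key=proxy_score)
print(best)
```

Under this toy objective the search drops full attention from every layer; with a proxy that rewards accuracy more heavily, it would instead keep full attention at the layers where it pays off, which is the trade-off PostNAS navigates.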