General-purpose large language models (LLMs), despite their broad capabilities, often struggle with specialized domain knowledge. This gap hinders their deployment as reliable research agents in demanding fields such as astronomy. Building on our prior work with AstroSage-Llama-3.1-8B, this study introduces AstroSage-Llama-3.1-70B, a 70-billion-parameter, domain-specialized natural-language AI assistant designed for research and education across astronomy, astrophysics, space science, astroparticle physics, cosmology, and astronomical instrumentation. Developed from the Meta-Llama-3.1-70B foundation, AstroSage-Llama-3.1-70B underwent extensive continued pre-training (CPT) on a large corpus of astronomical literature, followed by supervised fine-tuning (SFT) and model merging. We integrated reasoning chains into the SFT dataset, enabling AstroSage-Llama-3.1-70B either to answer a user query immediately or to first emit a human-readable thought process. Evaluated on a validated subset of 3,846 questions from the AstroMLab-1 benchmark (Ting et al., 2024) – derived from literature withheld during training – AstroSage-Llama-3.1-70B achieves top-tier performance (89.0%), matching GPT-5.2, Claude-4.5-Opus, and Gemini-3-Pro while being more cost-efficient. This work demonstrates that domain specialization, when applied to large-scale models, enables them to outperform generalist counterparts in specialized knowledge areas such as astronomy, thereby advancing the frontier of AI capabilities in the field.